Software
introduction
This is the information hub for SoLID software for physics studies.
Physics event generation is handled by a collection of independent software packages.
Detector simulation is based on GEMC, a Geant4-based simulation framework: http://gemc.jlab.org
Instructions for installation and for running event generation, simulation, and reconstruction are listed below.
How to use it
The SoLID simulation software has moved from SVN to GitHub at https://github.com/JeffersonLab/solid_gemc
It is distributed as a container at https://github.com/JeffersonLab/solid_release
All SoLID-related software: https://github.com/orgs/JeffersonLab/teams/halla-solid/repositories
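To get the current GitHub version of the code, a minimal sketch (the branch to use and the build steps are not covered here; see the README in each repository):
git clone https://github.com/JeffersonLab/solid_gemc (clone the SoLID GEMC repository)
git clone https://github.com/JeffersonLab/solid_release (clone the container/release setup)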
The following content is outdated as of 2019 and will be updated.
quick demo
This is the quickest way to run the simulation in graphic mode on the JLab ifarm machines with the official installation. Use it to get a feel for how it works.
ssh -XY your_lab_username@ifarm (login with X window forwarding enabled)
xclock & (test if X window forwarding works)
source /group/solid/solid_svn/set_solid (set up the environment with the official installation of the framework and the official repo)
cd $SoLID_GEMC/script (enter the official repo script dir)
solid_gemc solid_SIDIS_He3_full.gcard (two graphic windows should appear, one with the controls and one showing the detector)
solid_gemc solid_SIDIS_He3_full.gcard -USE_GUI=0 (batch mode, information shows in the terminal)
If you have any problem or it runs very slowly, see Software#on_ifarm
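For a slightly longer test, you can fix the number of events and keep a log. A minimal sketch, assuming GEMC's -N option for the number of events (verify against the GEMC documentation at http://gemc.jlab.org); the log file name is a placeholder:
solid_gemc solid_SIDIS_He3_full.gcard -USE_GUI=0 -N=100 >& run_test.log (run 100 events in batch mode and write all output to run_test.log; ">&" is the tcsh redirect for stdout and stderr)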
general instruction
We have a unified framework for installing GEMC and all of its dependencies
All SoLID software based on the framework is in the SoLID repository
You need both the framework and the SoLID repository to use the software
- installation of framework
- The framework uses the environment variable JLAB_ROOT to control the installation path and JLAB_VERSION to control the version
- See the detailed instructions for installation of framework
- installation of SoLID repository
- The SoLID repository is in SVN; everyone has read access, but only JLab accounts in the "solid" group can write
- checkout by "svn co https://jlabsvn.jlab.org/svnroot/solid solid_svn"
- access its WebSVN interface at https://jlabsvn.jlab.org/solid
- there are two ways to get automatic notifications about any change to SVN: use a graphical SVN client ("kdesvn" is good) or use the RSS feed on the WebSVN interface
- its directory structure
- "evgen", some event generators, (the others are in other repositories, see event generation section for details)
- "solid_gemc", GEMC 1.x related files for SoLID simulation
- "solid_gemc2", GEMC 2.x related files for SoLID simulation
- "subsystem" for individual subsystems
- "study" for studies involving different subsystems
- warning (as of 2018): if you try to check out an SVN repo anywhere under the /lustre disk after logging into ifarm (alias for ifarm1401 and ifarm1402), it will fail with an error due to a conflict between the SVN 1.7 client and NFS 3. The Computer Center knows about the problem and is looking into a long-term solution. The temporary workaround is to do any SVN operation from ifarm1101, which has SVN 1.6
default version and official location
(as of 2017/05)
The default framework version is JLAB_VERSION=1.3
The default system for the default framework is CentOS7.2.1511-x86_64 on ifarm.jlab.org (alias for ifarm1401 and ifarm1402)
Linux_CentOS6.5-x86_64-gcc4.4.7 and Linux_RedHat6.9-x86_64-gcc4.4.7 are also supported for older systems
(The framework was tested on many platforms, but for our use, it's only fully tested on systems similar to jlab ifarm system)
The official framework installation on jlab ifarm and jlab internal computers is at /group/solid/apps/jlab_root
The official SoLID repository checkout is at /group/solid/solid_svn; it will be kept in sync with the latest SVN.
(You should have your own checkout if you want to make any changes)
step by step guide
This is a step-by-step guide to set up and run the simulation, etc.
Make sure you meet the requirements before you do anything.
requirement
Do the following before installing, updating, or running the code (see the check sketch after this list)
- run "echo $SHELL" to check whether you are using the tcsh shell. If it's not your default shell after login, first open a clean terminal and run "tcsh". In general, bash should work, but tcsh is the default at JLab and is what these instructions use.
- clean your environment variables. For example, remove the environment variable setup in your login scripts like .cshrc and .login, or disable them temporarily with "mv .cshrc cshrc" and "mv .login login". This avoids conflicts with other software environment variables. Likewise, don't put our environment variables into your login scripts either; set them up every time you log in to a terminal instead
- if you are running in graphical mode on remote machines, make sure to turn on X11 forwarding with "ssh -X" or "ssh -Y" when connecting. If your local computer runs Windows, you need to turn on X11 forwarding in PuTTY's options and have an X window server like Xming or VcXsrv running. Test whether "xclock" or "xterm" works first
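A quick pre-flight check covering the three points above; a minimal sketch to run in the terminal you plan to use (the grep pattern is only a rough check for leftover environment variables):
echo $SHELL (should end in tcsh; if not, run "tcsh" first)
env | grep -i -e root -e geant (should print nothing if no leftover ROOT or Geant environment variables are set)
xclock & (a small clock window should appear if X11 forwarding works; close it afterwards)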
on ifarm
This allows users to run on ifarm and test code before submitting jobs to the JLab farm. You don't need to install the framework, and you can choose to use the official repo or your own repo.
If you can't ssh into ifarm, ask the JLab Computer Center helpdesk@jlab.org to allow your account to access ifarm.
(additional requirement for graphic mode) If you want to run on ifarm in graphic mode, see the help at ifarm_graphic_mode first. If you want to run on ifarm in batch mode without graphics, simply ssh into ifarm, then go to the next step.
(use the official installation of the framework with the official repo, which you can't modify)
source /group/solid/solid_svn/set_solid
(use the official installation of the framework with your own repo so that you can make changes)
cd your_choice_of_solid_repo_path
svn co https://jlabsvn.jlab.org/svnroot/solid solid_svn (check out the repo)
cd solid_svn
cp set_solid set_solid_mine (create your copy of the env script)
edit set_solid_mine by following the instructions within; you only need to change $SoLID_GEMC
source set_solid_mine (set up the env with your repo)
cd $SoLID_GEMC/source/$GEMC_VERSION (enter the source dir for solid_gemc)
scons OPT=1 (compile solid_gemc)
source set_solid_mine (optional step if solid_gemc can't be found in the default PATH; if it doesn't work the first time, try it in a clean terminal)
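Before running, you can verify that the environment and the freshly compiled binary are picked up; a minimal check sketch:
echo $SoLID_GEMC (should point to your repo's solid_gemc area)
which solid_gemc (should resolve to the binary you just compiled)
ls $SoLID_GEMC/script (the gcard files used in the next step live here)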
(now you can run it)
cd $SoLID_GEMC/script (enter the script dir)
solid_gemc solid_SIDIS_He3_full.gcard (graphic mode, one window has the controls and the other shows the detector)
solid_gemc solid_SIDIS_He3_full.gcard -USE_GUI=0 (batch mode, information shows in the terminal)
on jlab internal machine
If your machine is on the JLab internal network, you can use the pre-compiled framework shared over the JLab network.
more /etc/redhat-release (check if your machine is supported, see Software#default_version_and_official_location; otherwise you need to change or update to a supported system)
ls /group/solid/apps/jlab_root (check if you can access the pre-compiled framework)
(if you don't have access, you have two choices: 1. ask the JLab Computer Center helpdesk@jlab.org to add it for your machine, or 2. use sshfs as below, after installing fuse_sshfs with the EPEL repo turned on by following the instructions here)
su -l
mkdir /group
chown your_jlab_username.your_jlab_group /group
exit
sshfs -o workaround=all ifarm:/u/group /group
(use the official installation of the framework with your own repo so that you can make changes)
cd your_choice_of_solid_repo_path
svn co https://jlabsvn.jlab.org/svnroot/solid solid_svn (check out the repo)
cd solid_svn
cp set_solid set_solid_mine (create your copy of the env script)
edit set_solid_mine by following the instructions within; you only need to change SoLID_GEMC
source set_solid_mine (set up the env with your repo)
cd $SoLID_GEMC/source/$GEMC_VERSION (enter the source dir for solid_gemc)
scons OPT=1 (compile solid_gemc)
source set_solid_mine (optional step if solid_gemc can't be found in the default PATH; if it doesn't work the first time, try it in a clean terminal)
(now you can run it)
cd $SoLID_GEMC/script (enter the script dir)
solid_gemc solid_SIDIS_He3_full.gcard (graphic mode, one window has the controls and the other shows the detector)
solid_gemc solid_SIDIS_He3_full.gcard -USE_GUI=0 (batch mode, information shows in the terminal)
If you have any problem running the official installation of the framework, double-check that your system is supported. It could be that some needed packages are not installed on your local machine and you need to install them as root; refer to "prepare for installation" at [1]
If the problem persists, go to Software#on_any_machine
on any machine
This gives you maximum freedom: use your own installation of the framework and your own repo
install the default version of the framework by following the installation instructions at installation of framework
cd your_choice_of_solid_repo_path
svn co https://jlabsvn.jlab.org/svnroot/solid solid_svn (check out the repo)
cd solid_svn
cp set_solid set_solid_mine (create your copy of the env script)
edit set_solid_mine by following the instructions within; you need to change SoLID_GEMC and JLAB_ROOT
source set_solid_mine (set up the env with your repo)
check if the version you installed needs these fixes by following Jlab_software_tmp_fix
cd $SoLID_GEMC/source/$GEMC_VERSION (enter the source dir for solid_gemc)
scons OPT=1 (compile solid_gemc)
source set_solid_mine (optional step if solid_gemc can't be found in the default PATH; if it doesn't work the first time, try it in a clean terminal)
(now you can run it)
cd $SoLID_GEMC/script (enter the script dir)
solid_gemc solid_SIDIS_He3_full.gcard (graphic mode, one window has the controls and the other shows the detector)
solid_gemc solid_SIDIS_He3_full.gcard -USE_GUI=0 (batch mode, information shows in the terminal)
on container
Coordinate System
SoLID uses the coordinate system below, in the lab frame, for event generator output, detector arrangement, and reconstruction
- the electron beam goes along the +z axis at x=0, y=0
- the solenoid coil is centered at the origin (x=0, y=0, z=0), with its axis along the electron beam
- the y axis is vertical, with +y pointing up relative to the ground
- the x axis is horizontal, with +x pointing left when riding along the electron beam
- the +x axis has phi angle 0 deg, the +y axis has phi angle 90 deg
- the phi angle coverage is from -180 to 180 deg, which can be obtained with atan2(y,x)*180/Pi or TVector3.Phi()*180/Pi
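As a quick sanity check of the phi convention, phi can be computed from (x, y) with atan2; a minimal shell sketch using awk (any tool with an atan2 function gives the same result):
echo "0 1" | awk '{ pi = atan2(0,-1); printf "%.1f\n", atan2($2,$1)*180/pi }' (a point on the +y axis: prints 90.0)
echo "1 -1" | awk '{ pi = atan2(0,-1); printf "%.1f\n", atan2($2,$1)*180/pi }' (a point below the +x axis: prints -45.0)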
Physics Event Generation
Detector Simulation
Event Reconstruction
Study
Info
emaillist
- mailing list solid_software@jlab.org (registration and archive)
proxy
To access outside sites from ifarm
setenv http_proxy http://jprox.jlab.org:8081
setenv https_proxy https://jprox.jlab.org:8081
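To verify that the proxy works, a minimal check (curl honors the http_proxy and https_proxy environment variables; github.com is just an example outside site):
curl -sI https://github.com | head -1 (should print an HTTP status line if outside access works through the proxy)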
linux group
SoLID Linux group is "12gev_solid"
check your account's groups with the command "groups username" or "id username"
Please ask Ole <ole at jlab.org> to add your jlab account to the group if you want to use SoLID related computing resource at jlab.
Batch Farm Project
slurm jobs use "--account=halla"
(outdated) auger jobs use "PROJECT: solid"
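A minimal slurm submission sketch with that account, assuming direct sbatch submission (the job script name and time limit are placeholders):
sbatch --account=halla --time=01:00:00 myjob.sh (submit the placeholder script myjob.sh under the halla account)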
disk space and access rule
the basics
- general
Organizing a (small) software project
https://hepsoftwarefoundation.org/training/curriculum.html
https://software-carpentry.org/
- jlab
Brad's Farm Use and Computing Resources Tips and Tricks
https://scicomp.jlab.org/docs/
https://scicomp.jlab.org/docs/JupyterHub
important things to remember
- best practice
- put source code in GitHub and clone it onto the home disk for small sizes, or onto the work disk /work/halla/solid/$USER for large sizes
- put large output files on volatile /volatile/halla/solid/$USER; decide whether you want to keep them, then move them to cache (the move takes no time because the two disks share the same backend). Note that small files are not good for volatile or cache (both on the lustre system), so tar them if you have to (see the tar sketch after this list)
- before moving files to cache, read the "tape use rule" below and ask Ole and Zhiwen if you have questions
- Is my file safe?
- files on the work disk have a backup at /work/XXX/.zfs (the old work disk before fall 2021 had no backup and has failed before)
- files on home have a backup at /home/.snapshot
- files on group have a backup at /group/.snapshot
- files on volatile have no backup and can be deleted regardless of how new they are if the disk is getting really full, which happens often
- files on mss (tape) have no backup, except raw data from real experiments belonging to user "halldata", and tape corruption does happen
- files on cache (write-through) are copied to tape automatically after some days, except for small files below 1MB
- so use GitHub, and keep large files on volatile, then cache
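As mentioned in the best-practice list above, many small files should be tarred before they go to volatile or cache; a minimal sketch (the directory and tarball names are placeholders):
cd /volatile/halla/solid/$USER
tar -czvf small_files.tar.gz my_small_output_dir (bundle a directory of small files into one tarball before keeping it on volatile or moving it to cache)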
Disk Space
path | retention and backup | quota | intended use |
/group/solid | long term, has backup | 200GB | for shared code; only write here after asking the SoLID software coordinator |
/mss/halla/solid | permanent, on tape | no limit | |
/work/halla/solid | long term, but has NO backup | 5TB | for small files, not for data |
/cache/halla/solid | write-through cache for mss, short term | ??? quota ??? reserved | for large files, with automatic backup to tape |
/volatile/halla/solid | short term | 30TB quota, 15TB reserved | for large files which you can afford to lose at any time |
check the status under "File System" at https://scicomp.jlab.org (the used space shown is not very up to date; run "jvolatile info a-solid" to get the latest size)
reference https://scicomp.jlab.org/docs/node/6 and https://scicomp.jlab.org/docs/node/28
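To check how much space your own files take in a given area, plain du works; a generic sketch (not a scicomp tool, and it can be slow on lustre when there are many files):
du -sh /volatile/halla/solid/$USER (total size of your volatile area)
du -sh /work/halla/solid/$USER (total size of your work area)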
Auto deletion rule
Files on cache and volatile are expected to have an expiration date of six months. In addition, older files will be auto-deleted once the quota is exceeded.
This is not a problem for cache, because all of its files are backed up on tape.
But this quota-exceeded auto deletion is very dangerous for volatile, because files will be deleted without warning to make room even when they are not very old; refer to [6]
tape use rule
Ask Ole and Zhiwen Zhao if you have any question or doubt
Tape is for long-term storage, so DO plan before using it
Files can be deleted by the user owning them, but can only be moved by the Computer Center, and directories often stay behind
Make sure your files belong to the group 12gev_solid before backup
jput and jmirror are always safe to use; they back up to tape directly, right away
Writing to cache delays the backup to tape for some days unless followed by "jcache put"; the automatic backup takes some days to happen, and some files won't be backed up automatically. Read this
The tape location /mss/halla/solid/ is organized as follows
- "backup" for special backups
- "sim" for SoLID full simulation
- "physics" and its subdirectories for files shared among various physics programs
- "subsystem" and its subdirectories for files shared among various subsystems
- "user" for anyone's personal SoLID-related files
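A minimal sketch of backing up one file directly to tape under the "user" area; the file name and destination are placeholders, and the "jput <local file> <mss directory>" form is my assumption, so check the jput documentation before use:
jput my_output.root /mss/halla/solid/user/your_jlab_username/ (back up one file to your personal tape area; placeholder names)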
Use a command like this to find out the size of a certain directory on tape; the output is a number in GB
find /mss/halla/solid/backup/ -type f -exec grep size= {} \; | awk -F= '{ a+= $2 } END {print a/1024^3}'
Not all files in the tape library are tracked by cacheManager, such as very old files that were never jcached. These files need to be removed using jremove. Any file put onto tape through cacheManager should be removed with 'jcache tapeRemove'. After you jremove a file from tape, running 'jcache list /cache/<file-path>' will let it figure out that the file is no longer on tape and correct its metadata in the database.
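A short sketch of the two removal paths described above, using only the commands named there; the file paths are placeholders and the exact arguments should be checked against the scicomp documentation:
jcache tapeRemove /cache/halla/solid/sim/old_file.root (remove a file that went to tape through cacheManager; placeholder path)
jremove /mss/halla/solid/backup/old_file.root (remove a very old file that was never jcached; placeholder path)
jcache list /cache/halla/solid/backup/old_file.root (after jremove, this lets cacheManager notice the file is gone and correct its metadata; placeholder path)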
file ownership
You must belong to the group 12gev_solid for access; ask "Ole at jlab.org" to add you.
To avoid disk quota problems, please make sure that all files you place in the SoLID directories are owned by the Linux group 12gev_solid. If you've already created files that have the wrong group, just change the group ID of your directory and all its files and subdirectories to the correct one. You should only have to do this once:
chgrp -R 12gev_solid your-directory
To make this happen automatically for all newly created files in this directory, set the sgid bit on it:
chmod g+s your-directory
Better yet, set this bit for all subdirectories as well:
find your-directory -type d -exec chmod g+s {} \;
Again, you should only have to do this once.
Additionally, to reduce the chance of file access problems, every time you log in and before starting work in SoLID disk areas, switch your effective group ID to 12gev_solid:
newgrp 12gev_solid
Exit from the shell when done; that will return you to where you were before issuing the newgrp command.
old simulation software (for record keeping only, don't use it!)
- SoLID GEMC 1.x (simulation in GEANT4 with GEMC 1.x, used for later proposals and pCDR in 2014)
- SoLID Comgeant (simulation in GEANT3 used for the early proposals)