======nextnano.cloud======

==== Screenshot ====
The following screenshot shows six computers connected to the HTCondor pool called ''e25nn''.
120 slots are configured, of which 44 are currently available.
Computers 2, 3, 4 and 6 are selected to accept jobs.
Computers 2 and 6 are currently unavailable because they are in use.

{{ :nnm:screenshot_htcondor.png?direct&600 |}}
==== Recommended Installation Process ====
Download the HTCondor installer from [[https://research.cs.wisc.edu/htcondor/|HTCondor]].
  - On the webpage, click ''Download'' and go to the ''Current Stable Release'' of ''UW Madison'' (as of September 24, 2020: HTCondor 8.8.10).
  - We recommend the Windows file under ''Native Packages''. The filename looks similar to this one:
    * ''condor-8.8.10-513586-Windows-x64.msi'' (Version 8.8.10)
  - Select the file, agree to the license agreement and download the ''.msi'' file. During the download you can optionally enter your name, email address and institution, and subscribe to the HTCondor newsletter.

Install HTCondor.
  - Start the installer.
  - Click ''Next'' and accept the License Agreement.
  - Next there are two options. One dedicated computer manages all HTCondor jobs (the Central Manager); all other computers are ordinary pool members. If there is no Central Manager yet, you have to create a new pool.
    - If you **are** on the Central Manager, choose ''Create a new HTCondor Pool'' and fill in the name of the pool, e.g. ''nextnanoHTCondorPool''. This is a unique name for your pool of machines.
    - If you **are not** on the Central Manager, choose ''Join an existing HTCondor Pool'' and fill in the hostname of the Central Manager, i.e. the name of the computer where ''nextnanoHTCondorPool'' has been created.
  - Tick ''Submit jobs to HTCondorPool'' and choose ''Always run jobs and never suspend them.'' (Alternatives: if you do not want other people to run jobs on your machine at all, select ''Do not run jobs on this machine''; if you do not want other people to run jobs on your machine while you are working, select ''When keyboard has been idle for 15 minutes.'' You can of course modify these settings later.)
  - Fill in your domain name, e.g. your Windows domain such as ''yourcompanyname.com'' (without ''www''). All PCs of your network should get the same domain name; it does not necessarily have to be your Windows domain.
  - Hostname of SMTP server and email address of administrator: not needed currently, leave them blank.
  - Path to Java Virtual Machine: not needed currently, leave it blank.
  - Hosts with Read access: ''*''
  - Hosts with Write access: ''$(CONDOR_HOST), $(IP_ADDRESS), *.yourdomainname.com, 192.168.178.*'' (**replace** the preset ''*.cs.wisc.edu'' with your own domain name and **add** your local IP subnet, e.g. ''192.168.178.*''). On Windows you can find your IP subnet by opening the Command Prompt ''cmd.exe'' and typing ''ipconfig''.
  - Hosts with Administrator access: ''*'' (or ''$(IP_ADDRESS)'')
  - Enable VM Universe: ''No''
  - Choose an installation directory (e.g. ''C:\condor\'') and press ''Next''. The directory ''Program Files'' is problematic due to write permissions, so we do not recommend using it.
  - Press ''Install'' and type in the Administrator password of your PC. (You need Administrator rights.)
  - Once installed, please restart the computer. Then your new pool or pool member should be up and running.

A few more setup steps:
  - To be able to submit jobs from nextnanomat to HTCondor, you have to store your credentials once. Open a command shell and type the following command: ''condor_store_cred add''
    * Enter your password and you are ready to submit your first HTCondor job.
    * If this does not work, try ''condor_store_cred add -debug'' for more information on the error.
  - Please make sure that nextnanomat has successfully found the HTCondor pool. In nextnanomat, go to ''Tools'' -> ''Options'' -> ''Cloud computing''. If everything is set up correctly, the "HTCondor" section is highlighted in green and the available computers show up in "Cluster". If this is not the case, maybe you have not installed HTCondor on the computer where you are running nextnanomat. Please also check that the HTCondor installation path is correctly set within nextnanomat; e.g. the default path ''C:\condor'' might not be the one where you installed HTCondor.

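The setup steps above can be verified from a Command Prompt with standard HTCondor commands (this is a suggested check sequence, not part of the official installation; it requires a completed HTCondor installation):

<code>
condor_store_cred add    # store your Windows password once (prompts for it)
condor_store_cred query  # confirm that credentials are stored
condor_status            # list the machines and slots visible in the pool
</code>

If ''condor_status'' lists the expected machines, nextnanomat should find the pool as well.
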
===Summary of settings (Example)===
<code>
Hostname (for HTCondor pool): computername.yourcompanyname.com
Policy: "Always run jobs"
Accounting domain: yourcompanyname.com
Read access: *
Write access: $(CONDOR_HOST), $(IP_ADDRESS), *.yourcompanyname.com, 192.168.178.*
Administrator: $(IP_ADDRESS)
</code>

===Config file===
You can find your HTCondor config settings in the file ''C:\condor\condor_config''.
Let's look at the example below.

  * Your company is called ''Simpson''.
  * Your Windows domain is called ''simpson.com''.
  * Your HTCondor pool shall have the name ''TheSimpsonsCondorPool''.
  * The HTCondor host that manages the HTCondor jobs has the computer name ''homer.simpson.com''.
  * Your computer is called ''lisa.simpson.com''.
  * The computers in your network have the IP range ''192.168.188.*'' (or ''2001:db8:2042::*'' in IPv6).

<code>
RELEASE_DIR = C:\condor
LOCAL_CONFIG_FILE = $(LOCAL_DIR)\condor_config.local
REQUIRE_LOCAL_CONFIG_FILE = FALSE
LOCAL_CONFIG_DIR = $(LOCAL_DIR)\config
use SECURITY : HOST_BASED
#CONDOR_HOST = $(FULL_HOSTNAME)         # on the computer called homer
CONDOR_HOST = homer                     # on the computer called lisa
COLLECTOR_NAME = TheSimpsonsCondorPool  # only on the computer called homer
#UID_DOMAIN =                           # empty if you do not have a domain
UID_DOMAIN = simpson.com
SOFT_UID_DOMAIN = TRUE           # omit this entry if you do not have a domain
FILESYSTEM_DOMAIN = simpson.com  # omit this entry if you do not have a domain
CONDOR_ADMIN =
SMTP_SERVER =
ALLOW_READ = *
ALLOW_WRITE = $(CONDOR_HOST), $(IP_ADDRESS), *.simpson.com, 192.168.188.*, 2001:db8:2042::*
ALLOW_ADMINISTRATOR = $(IP_ADDRESS)
use POLICY : ALWAYS_RUN_JOBS
#use POLICY : DESKTOP
WANT_VACATE = FALSE
WANT_SUSPEND = TRUE
#DAEMON_LIST = MASTER SCHEDD COLLECTOR NEGOTIATOR STARTD  # on the computer called homer
#DAEMON_LIST = MASTER SCHEDD STARTD KBDD                  # on lisa, if the "keyboard idle 15 minutes" option was chosen
DAEMON_LIST = MASTER SCHEDD STARTD                        # on the computer called lisa
</code>

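To see which values a machine actually uses after editing ''condor_config'', the standard ''condor_config_val'' tool can be queried, for example:

<code>
condor_config_val -config      # show which configuration files are in use
condor_config_val UID_DOMAIN   # print the effective value of a single setting
</code>

This is a convenient way to spot a setting that was edited in the wrong file or overridden by a local config file.
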
==== Submitting jobs to HTCondor pool with nextnanomat ====
**Submit job**
  - Add a job to the Batch list in the **Run** tab.
  - Click on the **Run in HTCondor Cluster** button (the button with triangle and network).

**Show information on HTCondor cluster**
  - Click on **Show Additional Info for Cluster Simulation**.
  - Press the **Refresh** button on the right.
  - The results of the ''condor_status'' command are shown, i.e. the number of compute slots is displayed.
  - You can select another HTCondor command such as ''condor_q'' to show the status of your submitted jobs, i.e. select ''condor_q'' and then press the **Refresh** button.
  * You can type any command into the line **System command:**, e.g. ''dir''.
  * The button **Open Documentation** opens the online documentation (this website).

**Results of HTCondor simulations**
  * Once your HTCondor jobs are finished, the results are automatically copied back to your simulation output folder ''<nextnano simulation output folder>\<name of input file>\''.
  * For debugging the HTCondor job, you can analyze the generated log file, ''<input file name>.log''.

==== Useful HTCondor commands for the Command Prompt ====
  * ''condor_submit <filename>.sub'' Submits a job to the pool.
  * ''condor_q'' Shows the current state of your own jobs in the queue.
    * ''condor_q -nobatch -global -allusers'' Shows the state of all jobs of all users in the cluster.
    * ''condor_q -goodput -global -allusers'' Shows the state and occupied CPU time of all jobs in the cluster.
    * ''condor_q -allusers -global -analyze'' Shows detailed information for every job in the cluster.
    * ''condor_q -global -allusers -hold'' Shows why jobs are in the hold state.
  * ''condor_status'' Shows the state of all available resources.
  * ''condor_status -long'' Shows the state of all available resources and much other information.
  * ''condor_status -debug'' Shows the state of all available resources and some additional information, e.g. //WARNING: Saw slow DNS query, which may impact entire system: getaddrinfo(<Computername>) took 11.083566 seconds.//
  * ''condor_rm'' Removes jobs from the queue:
    * ''condor_rm -all'' Removes all jobs from the queue.
    * ''condor_rm <cluster>.<id>'' Removes the job with id <id> in cluster <cluster>. (It seems ''<cluster>.'' can be omitted; ''id'' is the ''JOB_IDS'' number.)
  * ''condor_release -all'' If any jobs are in the hold state, use this command to restart them.
  * ''condor_restart'' Restarts all HTCondor daemons/services after changes in the config file.
  * ''condor_version'' Returns the version number of HTCondor.
  * ''condor_store_cred query'' Returns information about the credentials stored for HTCondor jobs.
  * ''condor_history'' Lists the recently submitted jobs. If the status of a specific job ''ID'' has the value ''ST''=''C'', then this job has been completed (''C'') successfully.
  * ''condor_status -master'' Returns the name, HTCondor version, CPU and memory of the central manager.
  * ''net start condor'' Open the Command Prompt ''cmd.exe'' as Administrator and type ''net start condor''. This has the same effect as restarting your computer, i.e. the networking service ''condor'' is started. This is useful if you have changed your local ''condor_config'' file.

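As an illustration for ''condor_submit'', a minimal submit description file might look as follows. All file names here are hypothetical examples; when you submit through nextnanomat, this file is generated for you.

<code>
# example.sub -- minimal HTCondor submit description file (file names are hypothetical)
executable   = nextnano.exe
arguments    = myinputfile.in
log          = myinputfile.log
output       = myinputfile.out
error        = myinputfile.err
request_cpus = 1
queue
</code>

It would be submitted with ''condor_submit example.sub'' and monitored with ''condor_q''.
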
==== Configuration options for the Central Manager computer ====
With the following option in the ''condor_config'' file on the central manager, one can set a policy that spreads jobs out over several machines rather than filling all slots of one computer before filling the slots of the others.
<code>
##------nn: SPREAD JOBS BREADTH-FIRST OVER SERVERS
##-- Jobs are "spread out" as much as possible,
##   so that each machine is running the fewest number of jobs.
NEGOTIATOR_PRE_JOB_RANK = isUndefined(RemoteOwner) * (- SlotId)
</code>

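After editing the config file on the central manager, the standard command below makes the running daemons re-read their configuration without a reboot (alternatively, ''condor_restart'' restarts the daemons entirely, see above):

<code>
condor_reconfig
</code>
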
==== FAQ ====
**Q**: I submitted a job to HTCondor, but nothing happens. nextnanomat says "transmitted".

**A**: It could be that not all required settings have been read in. You can try typing ''condor_restart'' on the command line. Please make sure that you entered your credentials using ''condor_store_cred add -debug''. Then start nextnanomat again.

**Q**: I submitted a job to HTCondor, but the Batch line of nextnanomat is stuck at ''preparing''. What is wrong?

**A1**: Did you store your credentials after the installation of HTCondor? If not, enter ''condor_store_cred add'' into the command prompt to add your password, see above (Recommended Installation Process).

**A2**: Did you change your password recently? If yes, you have to reenter your credentials for HTCondor.
Enter ''condor_store_cred add'' into the command prompt to add your password, see above (Recommended Installation Process). If this does not work, try ''condor_store_cred add -debug'' for more information on the error.

**Q**: I specified target machines in Tools - Options. Afterwards, every job submitted to HTCondor is stuck at ''transmitting''. What is wrong?

**A**: The value of ''UID_DOMAIN'' in the condor_config file needs to be the same on every computer of your cluster. (You can easily check it in a command prompt with ''condor_status -af uiddomain''.) If the value differs between machines, no matching computer will be found and the job won't be transmitted successfully.

==== Problems with HTCondor ====
=== Error: communication error ===
If you receive the following error when you type ''condor_status''
<code>
C:\Users\<your user name>>condor_status
Error: communication error
CEDAR:6001:Failed to connect to <123.456.789.123>
</code>
you can check whether the computer associated with this IP address is your HTCondor computer using the following command.
<code>
nslookup 123.456.789.123
</code>
It is also a good idea to type
<code>
nslookup
</code>
This returns the name of the default server that resolves DNS names.
If it is not the expected computer, you can open a Command Prompt as **Administrator** and type ''ipconfig /flushdns'' to flush the DNS resolver cache.
<code>
C:\Users\<your user name>>ipconfig /flushdns
</code>
If the DNS address cannot be resolved correctly, this could be related to a VPN connection that has configured a different default server for domain name to IP address mapping,
e.g. if your Windows domain is called contoso.com (which is only visible within your own network and your own HTCondor pool) but your DNS is resolved to www.contoso.com (which might be outside your local HTCondor pool).

=== Error: ''condor_store_cred add'' failed with ''Operation failed. Make sure your ALLOW_WRITE setting include this host.'' ===
Solution:
Edit the ''condor_config'' file and add the host, i.e. the local computer name (here: nn-delta).
<code>
    ALLOW_WRITE = $(CONDOR_HOST), $(IP_ADDRESS)
==> ALLOW_WRITE = $(CONDOR_HOST), $(IP_ADDRESS), nn-delta
</code>

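After changing ''ALLOW_WRITE'', HTCondor has to re-read its configuration before the retry succeeds; a plausible sequence using the standard commands is:

<code>
condor_reconfig        # make the daemons re-read condor_config
condor_store_cred add  # retry storing the credentials
</code>
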
=== Error? Check the log files ===
If you encounter any strange errors, you can find some hints in the history or log files generated by HTCondor.
You can find them here:

''C:\condor\spool''
  * history

''C:\condor\log''
  * CollectorLog
  * MasterLog
  * MatchLog
  * NegotiatorLog
  * ProcLog
  * SchedLog
  * ShadowLog
  * SharedPortLog
  * StarterLog
  * StartLog
More details can be found here: [[https://htcondor.readthedocs.io/en/v8_9_3/misc-concepts/logging.html|Logging in HTCondor]]

  
nnm/cloud_computing.1704294933.txt.gz · Last modified: 2024/01/03 16:15 by stefan.birner