move ansible/install part to separate git repository

Dmitry Ozerov
2023-08-31 13:17:26 +02:00
parent 489a2fcd2d
commit b2a9d2d9c1
77 changed files with 0 additions and 2382 deletions
@@ -1,37 +0,0 @@
To install a broker node:
ansible-playbook --extra-vars "host=broker_test" install_broker_node.yml
To clean the installation on a broker node:
ansible-playbook --extra-vars "host=broker_test" clean_broker_node.yml
The following pre-defined configurations exist. To apply one:
ansible-playbook <machine>.<detector_configuration>.yml
Machine    detector_configuration
daq9       JF15
daq4       JF02_JF06
daq4       JF06-4M
daq3       JF14
daq3
...        ...
Example:
ansible-playbook daq4.JF02_JF06.yml
To switch a machine to another configuration without replacing the installed software:
ansible-playbook --extra-vars "host=sf_daq_alvra" clean_receiver_daq.config.yml
To completely clean a machine of daq software:
ansible-playbook --extra-vars "host=sf_daq_alvra" clean_receiver_daq.all.yml
Some of the steps can be run separately (a combined example follows this list), namely:
1. clean the daq machine of obsolete services:
ansible-playbook --extra-vars "host=sf_daq_maloja" clean_old_services.yml
2. install sf_daq_buffer on a receiver machine:
ansible-playbook --extra-vars "host=sf_daq_alvra" install_sf_daq_buffer.yml
3. install streamvis on a receiver machine:
ansible-playbook --extra-vars "host=sf_daq_alvra" install_streamvis.yml
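A possible full sequence for one receiver machine (a sketch only, combining the commands above; host and configuration names are illustrative):
ansible-playbook --extra-vars "host=sf_daq_maloja" clean_old_services.yml
ansible-playbook --extra-vars "host=sf_daq_maloja" install_sf_daq_buffer.yml
ansible-playbook --extra-vars "host=sf_daq_maloja" install_streamvis.yml
ansible-playbook daq9.JF15.yml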
@@ -1,89 +0,0 @@
# Configuration of sf-daq components via ansible
## Table of contents
1. [Retrieval node](#retrieval)
2. [Receiver node](#receiver)
3. [Troubleshooting](#troubles)
<a id="retrieval"></a>
## Retrieval node
There is one production retrieval node and several development nodes:
| host | Comment |
| ------------- | ------------- |
| broker_test | Dev node |
| broker_production | Production node |
To install a broker node:
```bash
ansible-playbook --extra-vars "host=broker_test" install_broker_node.yml
```
To clean a broker node of the daq services installation:
```bash
ansible-playbook --extra-vars "host=broker_test" clean_broker_node.yml
```
<a id="receiver"></a>
## Receiver node
For each beamline there is at least one main receiver node:
| host_id | Comment |
| ------------- | ------------- |
| sf_daq_alvra | Alvra |
| sf_daq_bernina | Bernina |
| sf_daq_maloja | Maloja |
To clean a receiver node of its daq configuration:
1. to clean up just the detector configuration, leaving the installed software in place:
```bash
ansible-playbook --extra-vars "host={host_id}" clean_receiver_daq.config.yml
```
2. to clean up the receiver node completely (detector configuration and software):
```bash
ansible-playbook --extra-vars "host={host_id}" clean_receiver_daq.all.yml
```
To install the corresponding configuration (after a fresh daq installation or after a cleanup):
```bash
ansible-playbook {configuration}.yml
# e.g.
ansible-playbook daq4.JF06.yml
```
where {configuration} is one of the entries in this table (a verification sketch follows the table):
| configuration | Comment |
| ------------- | ------------- |
| daq8.JF02_JF06-4M | Hamos+4M |
| daq8.JF02_JF06 | Hamos+16M(unstable) |
| daq8.JF06-4M_JF09_JF10 | Hamos + FLEX detectors |
| daq8.JF06-4M | 4M detector |
| daq8.JF06 | 16M detector |
| daq3.JF01_JF03_JF04_JF07 | 1p5M+I0+Fluo+16M |
| daq3.JF01_JF03_JF07-3m | 1p5M+I0+3modules_from16M |
| daq3.JF01_JF03_JF07_JF14 | 1p5M+I0+16M+RIXS |
| daq3.JF01_JF03_JF07 | 1p5M+I0+16M |
| daq3.JF01_JF03_JF13_JF14 | 1p5M+I0+Vacuum+RIXS |
| daq3.JF03_JF07_JF14 | I0+16M+RIXS |
| daq3.JF03_JF14 | I0+RIXS |
| daq3.JF14 | RIXS |
| daq9.JF15 | 1st Maloja Detector |
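For example, to apply the Maloja configuration and then check that its per-detector services came up, something like the following might be used (a sketch: the `systemctl` step runs on the receiver host itself, and the unit names follow the `{{ detector }}-<service>` pattern created by the receiver playbooks):
```bash
# on the ansible control host
ansible-playbook daq9.JF15.yml

# on the receiver host (sf-daq-9); JF15 is the Maloja detector short name
systemctl status JF15-udp_recv JF15-buffer_writer JF15-assembler JF15-stream2vis JF15-vis
```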
<a id="troubles"></a>
## Troubleshooting
### Problem with retrieval/broker node
Re-run the broker node installation playbook:
```bash
ansible-playbook --extra-vars "host=broker_production" install_broker_node.yml
```
If the problem persists, clean the broker node of its services and re-run the installation:
```bash
ansible-playbook --extra-vars "host=broker_production" clean_broker_node.yml
ansible-playbook --extra-vars "host=broker_production" install_broker_node.yml
```
### Problem/reconfiguration of receiver node
In case reconfiguration of a receiver node is needed (say Bernina switches to the 1p5M+I0 combination after previously running the RIXS-only configuration): first clean the previous detector configuration, then apply a suitable pre-defined configuration:
```bash
ansible-playbook --extra-vars "host=sf_daq_bernina" clean_receiver_daq.config.yml
ansible-playbook daq3.JF01_JF03_JF07.yml
```
@@ -1,6 +0,0 @@
[defaults]
inventory=inventories/sf-daq
host_key_checking = False
@@ -1,19 +0,0 @@
- name: clean daq machine from all broker services
hosts: '{{ host }}'
become: true
tasks:
- name: stop, disable and remove services (leave software)
shell: |
systemctl stop sf-daq*
rm -rf /etc/systemd/system/multi-user.target.wants/sf-daq*
rm -rf /etc/systemd/system/sf-daq*
rm -rf /etc/cron.d/clean_buffer.cron
rm -rf /etc/systemd/system/sf.*.epics*
- name: clean broker machine from software
hosts: '{{ host }}'
become: true
tasks:
- name: remove daq software
shell: |
rm -rf /home/dbe/git /home/dbe/miniconda3 /home/dbe/service_scripts /home/dbe/bin /home/dbe/.condarc
@@ -1,17 +0,0 @@
- name: clean daq machine from all dap services
hosts: '{{ host }}'
become: true
tasks:
- name: stop, disable and remove services (leave software)
shell: |
systemctl stop JF*
rm -rf /etc/systemd/system/multi-user.target.wants/JF*
rm -rf /etc/systemd/system/JF*
- name: clean daq machine from software
hosts: '{{ host }}'
become: true
tasks:
- name: remove all scripts software
shell: |
rm -rf /home/dbe/git /home/dbe/miniconda3 /home/dbe/service_scripts /home/dbe/bin /home/dbe/.condarc
@@ -1,14 +0,0 @@
- name: clean daq machine from the old services
hosts: '{{ host }}'
become: true
tasks:
- name: stop, disable and remove services
shell: |
systemctl stop jungfrau*
rm -rf /etc/systemd/system/multi-user.target.wants/jungfrau*
rm -rf /etc/systemd/system/jungfrau*
rm -rf /home/dbe/git /home/dbe/miniconda3 /home/dbe/service_scripts /home/dbe/swissfel
@@ -1,9 +0,0 @@
- import_playbook: clean_receiver_daq.config.yml
- name: clean daq machine from daq software
hosts: '{{ host }}'
become: true
tasks:
- name: remove daq software
shell: |
rm -rf /home/dbe/git /home/dbe/miniconda3 /home/dbe/service_scripts /home/dbe/bin /home/dbe/.condarc
@@ -1,16 +0,0 @@
- name: clean daq machine from all receiver services
hosts: '{{ host }}'
become: true
tasks:
- name: stop, disable and remove services (leave software)
shell: |
systemctl stop JF*
rm -rf /etc/systemd/system/multi-user.target.wants/JF*
rm -rf /etc/systemd/system/JF*
rm -rf /etc/telegraf/telegraf.d/JF*
systemctl restart telegraf
rm -rf /home/dbe/service_scripts/JF*
@@ -1,212 +0,0 @@
- import_playbook: install_streamvis.yml
vars:
host: daq8
- import_playbook: dap_service.yml
vars:
host: daq8
detector: "{{ JF06_4M_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 11,12
dap_worker_cores: "0 1 2 3 4 5 6 7 8 9 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39"
dap_accumulator_cores: 10
dap_accumulator_host: sf-daq-8
dap_visualisation_host: sf-daq-8
- import_playbook: dap_service.yml
vars:
host: daq8
detector: "{{ JF07_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 11,12
dap_worker_cores: "0 1 2 3 4 5 6 7 8 9 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39"
dap_accumulator_cores: 10
dap_accumulator_host: sf-daq-8
dap_visualisation_host: sf-daq-8
- import_playbook: dap_service.yml
vars:
host: daq8
detector: "{{ JF17_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 11,12
dap_worker_cores: "0 1 2 3 4 5 6 7 8 9 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39"
dap_accumulator_cores: 10
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: install_streamvis.yml
vars:
host: daq7
- import_playbook: dap_service.yml
vars:
host: daq7
detector: "{{ JF01_detector_short_name }}"
dap_skip_frames: 1
visualisation_cores: 4
dap_worker_cores: "0 1 2"
dap_accumulator_cores: 3
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: dap_service.yml
vars:
host: daq7
detector: "{{ JF03_detector_short_name }}"
dap_skip_frames: 1
visualisation_cores: 5
dap_worker_cores: "6 7"
dap_accumulator_cores: 5
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: dap_service.yml
vars:
host: daq7
detector: "{{ JF02_detector_short_name }}"
dap_skip_frames: 1
visualisation_cores: 8
dap_worker_cores: "9 10 11 12 13 14 15 16"
dap_accumulator_cores: 8
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: dap_service.yml
vars:
host: daq7
detector: "{{ JF04_detector_short_name }}"
dap_skip_frames: 1
visualisation_cores: 8
dap_worker_cores: "9 10"
dap_accumulator_cores: 8
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: dap_service.yml
vars:
host: daq7
detector: "{{ JF05_detector_short_name }}"
dap_skip_frames: 1
visualisation_cores: 11
dap_worker_cores: "12 13"
dap_accumulator_cores: 11
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: dap_service.yml
vars:
host: daq7
detector: "{{ JF16_detector_short_name }}"
dap_skip_frames: 1
visualisation_cores: 30
dap_worker_cores: "27 28 29"
dap_accumulator_cores: 31
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: dap_service.yml
vars:
host: daq7
detector: "{{ JF17_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 14,15
dap_worker_cores: "17 18 19 20 21 22 23 24 25 26"
dap_accumulator_cores: 16
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: install_streamvis.yml
vars:
host: daq3
- import_playbook: dap_service.yml
vars:
host: daq3
detector: "{{ JF06_4M_detector_short_name }}"
dap_skip_frames: 1
visualisation_cores: 1,2
dap_worker_cores: "4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39"
dap_accumulator_cores: 3
dap_accumulator_host: sf-daq-8
dap_visualisation_host: sf-daq-8
#- import_playbook: dap_service.yml
# vars:
# host: daq3
# detector: "{{ JF06_detector_short_name }}"
# dap_skip_frames: 10
# visualisation_cores: 1
# dap_worker_cores: "1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39"
# dap_accumulator_cores: 1
# dap_accumulator_host: sf-daq-8
# dap_visualisation_host: sf-daq-8
- import_playbook: dap_service.yml
vars:
host: daq3
detector: "{{ JF07_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 1
dap_worker_cores: "1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39"
dap_accumulator_cores: 1
dap_accumulator_host: sf-daq-8
dap_visualisation_host: sf-daq-8
- import_playbook: dap_service.yml
vars:
host: daq3
detector: "{{ JF17_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 1
dap_worker_cores: "1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39"
dap_accumulator_cores: 1
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
- import_playbook: install_streamvis.yml
vars:
host: daq1
- import_playbook: dap_service.yml
vars:
host: daq1
detector: "{{ JF06_4M_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 10
dap_worker_cores: "0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55"
dap_accumulator_cores: 10
dap_accumulator_host: sf-daq-8
dap_visualisation_host: sf-daq-8
- import_playbook: dap_service.yml
vars:
host: daq1
detector: "{{ JF17_detector_short_name }}"
dap_skip_frames: 10
visualisation_cores: 10
dap_worker_cores: "0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55"
dap_accumulator_cores: 10
dap_accumulator_host: sf-daq-7
dap_visualisation_host: sf-daq-7
#- import_playbook: install_streamvis.yml
# vars:
# host: cn22
#- import_playbook: dap_service.yml
# vars:
# host: cn22
# detector: "{{ JF17_detector_short_name }}"
# dap_skip_frames: 10
# visualisation_cores: 10
# dap_worker_cores: "0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55"
# dap_accumulator_cores: 10
# dap_accumulator_host: sf-daq-7
# dap_visualisation_host: sf-daq-7
@@ -1,39 +0,0 @@
- name: install dap services
hosts: '{{ host }}'
become: true
vars:
detector_full_name: "{{ vars[detector + '_detector_full_name'] }}"
visualisation_view: "{{ vars[detector + '_visualisation_view'] }}"
visualisation_port: "{{ vars[detector + '_visualisation_port_dap'] }}"
visualisation_title: "{{ vars[detector + '_visualisation_title'] }}"
dap_data_stream: "{{ vars[detector + '_dap_data_stream'] }}"
dap_accumulator_port: "{{ vars[detector + '_dap_accumulator_port'] }}"
dap_to_visualisation: "{{ vars[detector + '_dap_to_visualisation'] }}"
dap_parameters_file: "/gpfs/photonics/swissfel/buffer/dap/config/pipeline_parameters.{{ detector_full_name }}.json"
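# the detector-specific values (e.g. JF01_detector_full_name, JF01_visualisation_view)
# are defined under [sf_daq_receivers:vars] in the inventory; vars[detector + '_...']
# resolves them from the short name passed in as "detector"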
tasks:
- name: install execution scripts
become_user: dbe
template: src={{item.src}} dest={{item.dest}}
with_items:
- { src: 'templates/dap_vis.sh', dest: '/home/dbe/service_scripts/{{ detector }}-dap_vis.sh' }
- { src: 'templates/dap_accumulator.sh', dest: '/home/dbe/service_scripts/{{ detector }}-dap_accumulator.sh' }
- { src: 'templates/dap_worker.sh', dest: '/home/dbe/service_scripts/{{ detector }}-dap_worker.sh' }
- name: install service files for all services
become_user: root
template: src={{item.src}} dest={{item.dest}}
with_items:
- { src: 'templates/dap_vis.service', dest: '/etc/systemd/system/{{ detector }}-dap_vis.service' }
- { src: 'templates/dap_worker.service', dest: '/etc/systemd/system/{{ detector }}-dap_worker@.service' }
- { src: 'templates/dap_workers.service', dest: '/etc/systemd/system/{{ detector }}-dap_workers.service' }
- { src: 'templates/dap_accumulator.service', dest: '/etc/systemd/system/{{ detector }}-dap_accumulator.service' }
- name: start dap services
systemd: state=started name={{item.name}} daemon_reload=yes
with_items:
- { name: '{{ detector }}-dap_vis' }
- { name: '{{ detector }}-dap_workers' }
- { name: '{{ detector }}-dap_accumulator' }
@@ -1,49 +0,0 @@
- import_playbook: install_sf_daq_buffer.yml
vars:
host: daq10
- import_playbook: install_streamvis.yml
vars:
host: daq10
- import_playbook: receiver_services_detector.yml
vars:
host: daq10
detector: "{{ JF16_detector_short_name }}"
detector_full_name: "{{ JF16_detector_full_name }}"
visualisation_view: "{{ JF16_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF16_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF16_visualisation_port }}"
visualisation_title: "{{ JF16_visualisation_title }}"
last_module_number: "{{ JF16_last_module_number }}"
initial_udp_port: "{{ JF16_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-cristallina
visualisation_cores: 24,25
stream2vis_cores: 34
cores_buffer_writer: 17 17 17
cores_udp_recv_receivers: 16 16 16
cores_assembler: 10
- import_playbook: receiver_services_detector.yml
vars:
host: daq10
detector: "{{ JF17_detector_short_name }}"
detector_full_name: "{{ JF17_detector_full_name }}"
visualisation_view: "{{ JF17_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF17_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF17_visualisation_port }}"
visualisation_title: "{{ JF17_visualisation_title }}"
last_module_number: "{{ JF17_last_module_number }}"
initial_udp_port: "{{ JF17_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-cristallina
visualisation_cores: 26,27
stream2vis_cores: 35
cores_buffer_writer: 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 5
cores_udp_recv_receivers: 6 6 6 6 7 7 7 7 8 8 8 8 9 9 9 9
cores_assembler: 11
@@ -1,132 +0,0 @@
- import_playbook: install_sf_daq_buffer.yml
vars:
host: daq12
- import_playbook: install_streamvis.yml
vars:
host: daq12
- import_playbook: receiver_services_detector.yml
vars:
host: daq12
detector: "{{ JF01_detector_short_name }}"
detector_full_name: "{{ JF01_detector_full_name }}"
visualisation_view: "{{ JF01_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF01_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF01_visualisation_port }}"
visualisation_title: "{{ JF01_visualisation_title }}"
last_module_number: "{{ JF01_last_module_number }}"
initial_udp_port: "{{ JF01_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-bernina
visualisation_cores: 1,2
stream2vis_cores: 3
cores_buffer_writer: 4 4 4
cores_udp_recv_receivers: 5 5 5
cores_assembler: 6
- import_playbook: receiver_services_detector.yml
vars:
host: daq12
detector: "{{ JF03_detector_short_name }}"
detector_full_name: "{{ JF03_detector_full_name }}"
visualisation_view: "{{ JF03_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF03_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF03_visualisation_port }}"
visualisation_title: "{{ JF03_visualisation_title }}"
last_module_number: "{{ JF03_last_module_number }}"
initial_udp_port: "{{ JF03_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-bernina
visualisation_cores: 7
stream2vis_cores: 8
cores_buffer_writer: 9
cores_udp_recv_receivers: 10
cores_assembler: 11
- import_playbook: receiver_services_detector.yml
vars:
host: daq12
detector: "{{ JF04_detector_short_name }}"
detector_full_name: "{{ JF04_detector_full_name }}"
visualisation_view: "{{ JF04_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF04_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF04_visualisation_port }}"
visualisation_title: "{{ JF04_visualisation_title }}"
last_module_number: "{{ JF04_last_module_number }}"
initial_udp_port: "{{ JF04_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-bernina
visualisation_cores: 12
stream2vis_cores: 13
cores_buffer_writer: 14
cores_udp_recv_receivers: 15
cores_assembler: 16
- import_playbook: receiver_services_detector.yml
vars:
host: daq12
detector: "{{ JF05_detector_short_name }}"
detector_full_name: "{{ JF05_detector_full_name }}"
visualisation_view: "{{ JF05_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF05_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF05_visualisation_port }}"
visualisation_title: "{{ JF05_visualisation_title }}"
last_module_number: "{{ JF05_last_module_number }}"
initial_udp_port: "{{ JF05_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-bernina
visualisation_cores: 17
stream2vis_cores: 18
cores_buffer_writer: 19
cores_udp_recv_receivers: 20
cores_assembler: 21
- import_playbook: receiver_services_detector.yml
vars:
host: daq12
detector: "{{ JF02_detector_short_name }}"
detector_full_name: "{{ JF02_detector_full_name }}"
visualisation_view: "{{ JF02_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF02_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF02_visualisation_port }}"
visualisation_title: "{{ JF02_visualisation_title }}"
last_module_number: "{{ JF02_last_module_number }}"
initial_udp_port: "{{ JF02_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-alvra
visualisation_cores: 29,30
stream2vis_cores: 31
cores_buffer_writer: 32 32 32 33 33 33 34 34 34
cores_udp_recv_receivers: 35 35 35 36 36 36 37 37 37
cores_assembler: 38
- import_playbook: receiver_services_detector.yml
vars:
host: daq12
detector: "{{ JF06_detector_short_name }}"
detector_full_name: "{{ JF06_detector_full_name }}"
visualisation_view: "{{ JF06_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF06_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF06_visualisation_port }}"
visualisation_title: "{{ JF06_visualisation_title }}"
last_module_number: "{{ JF06_last_module_number }}"
initial_udp_port: "{{ JF06_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-alvra
visualisation_cores: 39,40
stream2vis_cores: 41
cores_buffer_writer: 42 42 42 42 42 42 42 42 43 43 43 43 43 43 43 43 44 44 44 44 44 44 44 44 45 45 45 45 45 45 45 45
cores_udp_recv_receivers: 46 46 46 46 46 46 46 46 47 47 47 47 47 47 47 47 48 48 48 48 48 48 48 48 49 49 49 49 49 49 49 49
cores_assembler: 50
@@ -1,169 +0,0 @@
- import_playbook: install_sf_daq_buffer.yml
vars:
host: daq13
- import_playbook: install_streamvis.yml
vars:
host: daq13
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF08_detector_short_name }}"
detector_full_name: "{{ JF08_detector_full_name }}"
visualisation_view: "{{ JF08_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF08_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF08_visualisation_port }}"
visualisation_title: "{{ JF08_visualisation_title }}"
last_module_number: "{{ JF08_last_module_number }}"
initial_udp_port: "{{ JF08_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-alvra
visualisation_cores: 1
stream2vis_cores: 2
cores_buffer_writer: 3
cores_udp_recv_receivers: 4
cores_assembler: 5
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF09_detector_short_name }}"
detector_full_name: "{{ JF09_detector_full_name }}"
visualisation_view: "{{ JF09_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF09_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF09_visualisation_port }}"
visualisation_title: "{{ JF09_visualisation_title }}"
last_module_number: "{{ JF09_last_module_number }}"
initial_udp_port: "{{ JF09_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-alvra
visualisation_cores: 6
stream2vis_cores: 7
cores_buffer_writer: 8
cores_udp_recv_receivers: 9
cores_assembler: 10
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF10_detector_short_name }}"
detector_full_name: "{{ JF10_detector_full_name }}"
visualisation_view: "{{ JF10_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF10_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF10_visualisation_port }}"
visualisation_title: "{{ JF10_visualisation_title }}"
last_module_number: "{{ JF10_last_module_number }}"
initial_udp_port: "{{ JF10_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-alvra
visualisation_cores: 11
stream2vis_cores: 12
cores_buffer_writer: 13
cores_udp_recv_receivers: 14
cores_assembler: 15
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF13_detector_short_name }}"
detector_full_name: "{{ JF13_detector_full_name }}"
visualisation_view: "{{ JF13_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF13_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF13_visualisation_port }}"
visualisation_title: "{{ JF13_visualisation_title }}"
last_module_number: "{{ JF13_last_module_number }}"
initial_udp_port: "{{ JF13_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-bernina
visualisation_cores: 16
stream2vis_cores: 17
cores_buffer_writer: 18
cores_udp_recv_receivers: 19
cores_assembler: 20
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF14_detector_short_name }}"
detector_full_name: "{{ JF14_detector_full_name }}"
visualisation_view: "{{ JF14_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF14_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF14_visualisation_port }}"
visualisation_title: "{{ JF14_visualisation_title }}"
last_module_number: "{{ JF14_last_module_number }}"
initial_udp_port: "{{ JF14_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-bernina
visualisation_cores: 21
stream2vis_cores: 22
cores_buffer_writer: 23
cores_udp_recv_receivers: 24
cores_assembler: 25
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF11_detector_short_name }}"
detector_full_name: "{{ JF11_detector_full_name }}"
visualisation_view: "{{ JF11_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF11_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF11_visualisation_port }}"
visualisation_title: "{{ JF11_visualisation_title }}"
last_module_number: "{{ JF11_last_module_number }}"
initial_udp_port: "{{ JF11_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-txs
visualisation_cores: 26,27
stream2vis_cores: 28
cores_buffer_writer: 29 29 29 29
cores_udp_recv_receivers: 30 30 30 30
cores_assembler: 31
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF06_4M_detector_short_name }}"
detector_full_name: "{{ JF06_4M_detector_full_name }}"
visualisation_view: "{{ JF06_4M_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF06_4M_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF06_4M_visualisation_port }}"
visualisation_title: "{{ JF06_4M_visualisation_title }}"
last_module_number: "{{ JF06_4M_last_module_number }}"
initial_udp_port: "{{ JF06_4M_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-alvra
visualisation_cores: 32,33
stream2vis_cores: 34
cores_buffer_writer: 35 35 35 36 36 36 37 37
cores_udp_recv_receivers: 38 38 38 39 39 39 40 40
cores_assembler: 41
- import_playbook: receiver_services_detector.yml
vars:
host: daq13
detector: "{{ JF07_detector_short_name }}"
detector_full_name: "{{ JF07_detector_full_name }}"
visualisation_view: "{{ JF07_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF07_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF07_visualisation_port }}"
visualisation_title: "{{ JF07_visualisation_title }}"
last_module_number: "{{ JF07_last_module_number }}"
initial_udp_port: "{{ JF07_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-bernina
visualisation_cores: 42,43
stream2vis_cores: 44
cores_buffer_writer: 45 45 45 45 45 45 45 45 46 46 46 46 46 46 46 46 47 47 47 47 47 47 47 47 48 48 48 48 48 48 48 48
cores_udp_recv_receivers: 49 49 49 49 49 49 49 49 50 50 50 50 50 50 50 50 51 51 51 51 51 51 51 51 52 52 52 52 52 52 52 52
cores_assembler: 53
@@ -1,28 +0,0 @@
- import_playbook: install_sf_daq_buffer.yml
vars:
host: daq9
- import_playbook: install_streamvis.yml
vars:
host: daq9
- import_playbook: receiver_services_detector.yml
vars:
host: daq9
detector: "{{ JF15_detector_short_name }}"
detector_full_name: "{{ JF15_detector_full_name }}"
visualisation_view: "{{ JF15_visualisation_view }}"
visualisation_incoming_data_port: "{{ JF15_visualisation_incoming_data_port }}"
visualisation_port: "{{ JF15_visualisation_port }}"
visualisation_title: "{{ JF15_visualisation_title }}"
last_module_number: "{{ JF15_last_module_number }}"
initial_udp_port: "{{ JF15_initial_udp_port }}"
detector_config: "/gpfs/photonics/swissfel/buffer/config/{{ detector_full_name }}.json"
visualisation_alias: sf-daq-maloja
visualisation_cores: 15,16
stream2vis_cores: 14
cores_buffer_writer: 9 9 10 10 11 11 12 12
cores_udp_recv_receivers: 5 5 6 6 7 7 8 8
cores_assembler: 13
@@ -1,189 +0,0 @@
- name: install sf_daq_broker
hosts: '{{ host }}'
become: true
become_user: dbe
tasks:
- name: Create service directory
file:
path: /home/dbe/service_scripts
state: directory
- name: Create config directory
file:
path: /home/dbe/service_configs
state: directory
- name: install setup script
template:
src: templates/sf_daq_broker.setup.sh
dest: /home/dbe/service_scripts/sf_daq_broker.setup.sh
mode: '0755'
- name: execute setup script
shell: /home/dbe/service_scripts/sf_daq_broker.setup.sh
- name: install execution scripts
template: src={{item.src}} dest={{item.dest}}
with_items:
- { src: 'templates/sf-daq_broker.start.sh', dest: '/home/dbe/service_scripts/sf-daq_broker.start.sh' }
- { src: 'templates/sf-daq_broker_slow.start.sh', dest: '/home/dbe/service_scripts/sf-daq_broker_slow.start.sh' }
- { src: 'templates/sf-daq_writer.start.sh', dest: '/home/dbe/service_scripts/sf-daq_writer.start.sh' }
- { src: 'templates/sf-daq_detector_retrieve_writer.start.sh', dest: '/home/dbe/service_scripts/sf-daq_detector_retrieve_writer.start.sh' }
- { src: 'templates/sf-daq_detector_actions.start.sh', dest: '/home/dbe/service_scripts/sf-daq_detector_actions.start.sh' }
- name: install service files for all services
become_user: root
template: src={{item.src}} dest={{item.dest}}
with_items:
- { src: 'templates/sf-daq_broker.service', dest: '/etc/systemd/system/sf-daq_broker.service' }
- { src: 'templates/sf-daq_broker_slow.service', dest: '/etc/systemd/system/sf-daq_broker_slow.service' }
- { src: 'templates/sf-daq_writer@.service', dest: '/etc/systemd/system/sf-daq_writer@.service' }
- { src: 'templates/sf-daq_writers.service', dest: '/etc/systemd/system/sf-daq_writers.service' }
- { src: 'templates/sf-daq_detector_retrieve_writer@.service', dest: '/etc/systemd/system/sf-daq_detector_retrieve_writer@.service' }
- { src: 'templates/sf-daq_detector_retrieve_writers.service', dest: '/etc/systemd/system/sf-daq_detector_retrieve_writers.service' }
- { src: 'templates/sf-daq_detector_actions_worker@.service', dest: '/etc/systemd/system/sf-daq_detector_actions_worker@.service' }
- { src: 'templates/sf-daq_detector_actions_workers.service', dest: '/etc/systemd/system/sf-daq_detector_actions_workers.service' }
- import_playbook: install_sf_daq_buffer.yml
vars:
host: '{{ host }}'
- name: install cleanup service for the detector buffer
hosts: '{{ host }}'
become: true
become_user: dbe
tasks:
- name: install cleanup script
template:
src: templates/delete_old_files_in_buffer.sh
dest: /home/dbe/service_scripts/delete_old_files_in_buffer.sh
mode: '0755'
- name: install cleanup cron
become_user: root
template:
src: templates/clean_buffer.cron
dest: /etc/cron.d/clean_buffer.cron
- name: rabbitmq service
hosts: '{{ host }}'
become: true
become_user: dbe
tasks:
- name: install script
template:
src: templates/sf-msg-broker.start.sh
dest: /home/dbe/service_scripts/sf-msg-broker.start.sh
mode: '0755'
- name: start docker service
become_user: root
systemd:
state=started
enabled=yes
name=docker
- name: execute setup script
become_user: root
shell: /home/dbe/service_scripts/sf-msg-broker.start.sh
- name: start broker services
hosts: '{{ host }}'
become: true
tasks:
- name: start sf-daq_broker service
systemd:
state=started
name=sf-daq_broker.service
daemon_reload=yes
- name: start sf-daq_broker_slow service
systemd:
state=started
name=sf-daq_broker_slow.service
daemon_reload=yes
- name: start sf-daq_writers service
systemd:
state=started
name=sf-daq_writers.service
daemon_reload=yes
- name: start sf-daq_detector_retrieve_writers service
systemd:
state=started
name=sf-daq_detector_retrieve_writers.service
daemon_reload=yes
- name: start sf-daq_detector_actions service
systemd:
state=started
name=sf-daq_detector_actions_workers.service
daemon_reload=yes
- name: start crond service
systemd:
state=started
enabled=yes
name=crond
daemon_reload=yes
- name: install epics buffer, writer and validator services
hosts: '{{ host }}'
become: true
tasks:
- name: install systemd file for epics buffer service
template:
src: templates/sf.epics_buffer.service
dest: /etc/systemd/system/sf.{{ item.beamline_name }}.epics_buffer.service
loop: "{{ epics_buffer_settings }}"
- name: install systemd file for epics writer service
template:
src: templates/sf.epics_writer.service
dest: /etc/systemd/system/sf.{{ item.beamline_name }}.epics_writer.service
loop: "{{ epics_buffer_settings }}"
- name: install systemd file for epics validator service
template:
src: templates/sf.epics_validator.service
dest: /etc/systemd/system/sf.{{ item.beamline_name }}.epics_validator.service
loop: "{{ epics_buffer_settings }}"
- name: install start file for epics buffer service
template:
src: templates/sf.epics_buffer.sh
dest: /home/dbe/service_scripts/sf.{{ item.beamline_name }}.epics_buffer.sh
mode: '0755'
owner: 'dbe'
loop: "{{ epics_buffer_settings }}"
- name: install start file for epics writer service
template:
src: templates/sf.epics_writer.sh
dest: /home/dbe/service_scripts/sf.{{ item.beamline_name }}.epics_writer.sh
mode: '0755'
owner: 'dbe'
loop: "{{ epics_buffer_settings }}"
- name: install start file for epics validator service
template:
src: templates/sf.epics_validator.sh
dest: /home/dbe/service_scripts/sf.{{ item.beamline_name }}.epics_validator.sh
mode: '0755'
owner: 'dbe'
loop: "{{ epics_buffer_settings }}"
- name: install script to start redis
become_user: dbe
template:
src: templates/sf-redis.start.sh
dest: /home/dbe/service_scripts/sf-redis.start.sh
mode: '0755'
owner: 'dbe'
- name: start epics services
hosts: '{{ host }}'
become: true
tasks:
- name: start epics buffer services
systemd:
state=started
name=sf.{{ item.beamline_name }}.epics_buffer.service
daemon_reload=yes
loop: "{{ epics_buffer_settings }}"
- name: start epics writer services
systemd:
state=started
name=sf.{{ item.beamline_name }}.epics_writer.service
daemon_reload=yes
loop: "{{ epics_buffer_settings }}"
- name: start epics validator services
systemd:
state=started
name=sf.{{ item.beamline_name }}.epics_validator.service
daemon_reload=yes
loop: "{{ epics_buffer_settings }}"
@@ -1,55 +0,0 @@
- name: install sf_daq_buffer
hosts: '{{ host }}'
become: true
become_user: dbe
tasks:
- name: Create service directory
file:
path: /home/dbe/service_scripts
state: directory
- name: install setup script
template:
src: templates/sf_daq_buffer.setup.new.sh
dest: /home/dbe/service_scripts/sf_daq_buffer.setup.sh
mode: '0755'
- name: execute setup script
shell: /home/dbe/service_scripts/sf_daq_buffer.setup.sh
- name: Create bin directory
file:
path: /home/dbe/bin
state: directory
- name: Create links to daq_buffer executables
file:
src: '/home/dbe/git/sf_daq_buffer/build/{{ item.src }}'
dest: '/home/dbe/bin/{{ item.dest }}'
state: link
loop:
- { src: jf_udp_recv, dest: jf_udp_recv }
- { src: jf_buffer_writer, dest: jf_buffer_writer }
- { src: sf_stream, dest: sf_stream }
- { src: jf_assembler, dest: jf_assembler }
- { src: sf_writer, dest: sf_writer }
- name: install check for changes in configuration file script
template:
src: templates/check_config_changed.sh
dest: /home/dbe/service_scripts/check_config_changed.sh
mode: '0755'
- name: script for proper zeromq version installation
template:
src: templates/install_zeromq.sh
dest: /home/dbe/service_scripts/install_zeromq.sh
mode: '0755'
- name: install sudoers rule for telegraph
become_user: root
template:
src: templates/detector_monitor.sudoers
dest: /etc/sudoers.d/detector_monitor
- name: install proper zeromq version
become_user: root
shell: /home/dbe/service_scripts/install_zeromq.sh
@@ -1,18 +0,0 @@
- name: install streamvis
hosts: '{{ host }}'
become: true
become_user: dbe
vars:
streamvis_version: 1.12.0
tasks:
- name: Create service directory
file:
path: /home/dbe/service_scripts
state: directory
- name: install setup script
template:
src: templates/streamvis_setup.sh
dest: /home/dbe/service_scripts/streamvis_setup.sh
mode: '0755'
- name: execute setup script
shell: /home/dbe/service_scripts/streamvis_setup.sh
@@ -1,298 +0,0 @@
[all]
sf-daq-1.psi.ch
sf-daq-3.psi.ch
sf-daq-7.psi.ch
sf-daq-8.psi.ch
sf-daq-9.psi.ch
sf-daq-10.psi.ch
sf-daq-11.psi.ch
sf-daq-12.psi.ch
sf-daq-13.psi.ch
[broker_test]
sf-daq-1.psi.ch
[broker_test:vars]
minutes_cleanup=40
number_of_writers=8
number_of_detector_retrieve_writers=10
detector_retrieve_cores="9,10,11,12,13,14,15,16,17,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18"
number_of_detector_actions_workers=4
detector_actions_cores="9,10,11,12,13,14"
epics_buffer_container_version="1.2.4"
epics_buffer_settings=[ { 'beamline_name': 'alvra', 'buffer_cores': 35, 'writer_cores': '34', 'redis_port': 6000 } ]
[broker_production]
sf-daq-11.psi.ch
[broker_production:vars]
minutes_cleanup=20
number_of_writers=10
number_of_detector_retrieve_writers=10
detector_retrieve_cores="11,12,13,14,15,16,17,35,34,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,44,43,42,41,40,39,38,37,36,45"
number_of_detector_actions_workers=4
detector_actions_cores="51,52,53,54"
epics_buffer_container_version="1.3.5"
epics_buffer_log_level="DEBUG"
epics_buffer_settings=[ { 'beamline_name': 'alvra', 'buffer_cores': 46, 'writer_cores': '2,3,4,5', 'validator_cores': '6,7', 'redis_port': 6001, 'beamline_storage': 'swissfel'}, { 'beamline_name': 'bernina', 'buffer_cores': 47, 'writer_cores': '2,3,4,5', 'validator_cores': '6,7', 'redis_port': 6002, 'beamline_storage': 'swissfel'}, { 'beamline_name': 'cristallina', 'buffer_cores': 48, 'writer_cores': '2,3,4,5', 'validator_cores': '6,7', 'redis_port': 6003, 'beamline_storage': 'swissfel'}, { 'beamline_name': 'furka', 'buffer_cores': 49, 'writer_cores': '2,3,4,5', 'validator_cores': '6,7', 'redis_port': 6004, 'beamline_storage': 'swissfel_athos'}, { 'beamline_name': 'maloja', 'buffer_cores': 50, 'writer_cores': '2,3,4,5', 'validator_cores': '6,7', 'redis_port': 6005, 'beamline_storage': 'swissfel_athos'} ]
[daq1]
sf-daq-1.psi.ch
[daq3]
sf-daq-3.psi.ch
[daq8]
sf-daq-8.psi.ch
[daq7]
sf-daq-7.psi.ch
[daq9]
sf-daq-9.psi.ch
[daq10]
sf-daq-10.psi.ch
[daq12]
sf-daq-12.psi.ch
[daq13]
sf-daq-13.psi.ch
[cn22]
sf-cn-22.psi.ch
[sf_daq_receivers:children]
daq1
daq3
daq8
daq7
daq9
daq10
daq12
daq13
cn22
[sf_daq_receivers:vars]
#JF01
JF01_detector_short_name=JF01
JF01_detector_full_name=JF01T03V01
JF01_visualisation_incoming_data_port=9001
JF01_visualisation_port=5001
JF01_visualisation_view=bernina
JF01_visualisation_title=1p5M
JF01_last_module_number=02
JF01_initial_udp_port=50010
JF01_dap_data_stream=tcp://192.168.30.38:9101
JF01_dap_accumulator_port=13001
JF01_dap_to_visualisation=12001
JF01_visualisation_port_dap=5101
#JF02
JF02_detector_short_name=JF02
JF02_detector_full_name=JF02T09V03
JF02_visualisation_incoming_data_port=9002
JF02_visualisation_port=5002
JF02_visualisation_view=von_hamos
JF02_visualisation_title=4p5M_Alvra
JF02_last_module_number=08
JF02_initial_udp_port=50020
JF02_dap_data_stream=tcp://192.168.30.38:9102
JF02_dap_accumulator_port=13002
JF02_dap_to_visualisation=12002
JF02_visualisation_port_dap=5102
#JF03
JF03_detector_short_name=JF03
JF03_detector_full_name=JF03T01V02
JF03_visualisation_incoming_data_port=9003
JF03_visualisation_port=5003
JF03_visualisation_view=module
JF03_visualisation_title=I0
JF03_last_module_number=00
JF03_initial_udp_port=50030
JF03_dap_data_stream=tcp://192.168.30.38:9103
JF03_dap_accumulator_port=13003
JF03_dap_to_visualisation=12003
JF03_visualisation_port_dap=5103
#JF04
JF04_detector_short_name=JF04
JF04_detector_full_name=JF04T01V01
JF04_visualisation_incoming_data_port=9004
JF04_visualisation_port=5004
JF04_visualisation_view=module
JF04_visualisation_title=Fluorescence
JF04_last_module_number=00
JF04_initial_udp_port=50040
JF04_dap_data_stream=tcp://192.168.30.38:9104
JF04_dap_accumulator_port=13004
JF04_dap_to_visualisation=12004
JF04_visualisation_port_dap=5104
#JF05
JF05_detector_short_name=JF05
JF05_detector_full_name=JF05T01V01
JF05_visualisation_incoming_data_port=9005
JF05_visualisation_port=5005
JF05_visualisation_view=module
JF05_visualisation_title=stripsel
JF05_last_module_number=00
JF05_initial_udp_port=50050
JF05_dap_data_stream=tcp://192.168.30.38:9105
JF05_dap_accumulator_port=13005
JF05_dap_to_visualisation=12005
JF05_visualisation_port_dap=5105
#JF06
JF06_detector_short_name=JF06
JF06_detector_full_name=JF06T32V04
JF06_visualisation_incoming_data_port=9006
JF06_visualisation_port=5006
JF06_visualisation_view=large
JF06_visualisation_title=16M_Jungfrau_Alvra
JF06_last_module_number=31
JF06_initial_udp_port=50060
JF06_dap_data_stream=tcp://192.168.30.38:9106
JF06_dap_accumulator_port=13006
JF06_dap_to_visualisation=12006
JF06_visualisation_port_dap=5106
#JF06_4M
JF06_4M_detector_short_name=JF06_4M
JF06_4M_detector_full_name=JF06T08V04
JF06_4M_visualisation_incoming_data_port=9006
JF06_4M_visualisation_port=5006
JF06_4M_visualisation_view=large
JF06_4M_visualisation_title=16M_4M_center_Alvra
JF06_4M_last_module_number=7
JF06_4M_initial_udp_port=50060
JF06_4M_dap_data_stream=tcp://192.168.30.39:9106
JF06_4M_dap_accumulator_port=13006
JF06_4M_dap_to_visualisation=12006
JF06_4M_visualisation_port_dap=5106
#JF07
JF07_detector_short_name=JF07
JF07_detector_full_name=JF07T32V02
JF07_visualisation_incoming_data_port=9007
JF07_visualisation_port=5007
JF07_visualisation_view=large
JF07_visualisation_title=16M_Jungfrau_Bernina
JF07_last_module_number=31
JF07_initial_udp_port=50100
JF07_dap_data_stream=tcp://192.168.30.39:9107
JF07_dap_accumulator_port=13007
JF07_dap_to_visualisation=12007
JF07_visualisation_port_dap=5107
#JF07_3m
JF07_3m_detector_short_name=JF07
JF07_3m_detector_full_name=JF07T03V01
JF07_3m_visualisation_incoming_data_port=9007
JF07_3m_visualisation_port=5007
JF07_3m_visualisation_view=bernina
JF07_3m_visualisation_title=16M_3m_Jungfrau_Bernina
JF07_3m_last_module_number=02
JF07_3m_initial_udp_port=50100
#JF08
JF08_detector_short_name=JF08
JF08_detector_full_name=JF08T01V01
JF08_visualisation_incoming_data_port=9008
JF08_visualisation_port=5008
JF08_visualisation_view=square
JF08_visualisation_title=Visual
JF08_last_module_number=00
JF08_initial_udp_port=50140
#JF09
JF09_detector_short_name=JF09
JF09_detector_full_name=JF09T01V01
JF09_visualisation_incoming_data_port=9009
JF09_visualisation_port=5009
JF09_visualisation_view=square
JF09_visualisation_title=FLEX1
JF09_last_module_number=00
JF09_initial_udp_port=50150
#JF10
JF10_detector_short_name=JF10
JF10_detector_full_name=JF10T01V01
JF10_visualisation_incoming_data_port=9010
JF10_visualisation_port=5010
JF10_visualisation_view=square
JF10_visualisation_title=FLEX2
JF10_last_module_number=00
JF10_initial_udp_port=50160
#JF11
JF11_detector_short_name=JF11
JF11_detector_full_name=JF11T04V01
JF11_visualisation_incoming_data_port=9011
JF11_visualisation_port=5011
JF11_visualisation_view=square
JF11_visualisation_title=TXS1
JF11_last_module_number=03
JF11_initial_udp_port=50170
#JF13
JF13_detector_short_name=JF13
JF13_detector_full_name=JF13T01V01
JF13_visualisation_incoming_data_port=9013
JF13_visualisation_port=5013
JF13_visualisation_view=square
JF13_visualisation_title=vacuum
JF13_last_module_number=00
JF13_initial_udp_port=50190
#JF14
JF14_detector_short_name=JF14
JF14_detector_full_name=JF14T01V01
JF14_visualisation_incoming_data_port=9014
JF14_visualisation_port=5014
JF14_visualisation_view=square
JF14_visualisation_title=RIXS
JF14_last_module_number=00
JF14_initial_udp_port=50191
#JF15
JF15_detector_short_name=JF15
JF15_detector_full_name=JF15T08V01
JF15_visualisation_incoming_data_port=9015
JF15_visualisation_port=5015
JF15_visualisation_view=square
JF15_visualisation_title=Maloja
JF15_last_module_number=07
JF15_initial_udp_port=50192
#JF16
JF16_detector_short_name=JF16
JF16_detector_full_name=JF16T03V01
JF16_visualisation_incoming_data_port=9016
JF16_visualisation_port=5016
JF16_visualisation_view=bernina
JF16_visualisation_title=Cristallina-Q1
JF16_last_module_number=02
JF16_initial_udp_port=50200
JF16_dap_data_stream=tcp://192.168.30.11:9116
JF16_dap_accumulator_port=13016
JF16_dap_to_visualisation=12016
JF16_visualisation_port_dap=5116
#JF17
JF17_detector_short_name=JF17
JF17_detector_full_name=JF17T16V01
JF17_visualisation_incoming_data_port=9017
JF17_visualisation_port=5017
JF17_visualisation_view=large
JF17_visualisation_title=Cristallina-MX
JF17_last_module_number=15
JF17_initial_udp_port=50203
JF17_dap_data_stream=tcp://192.168.30.11:9117
JF17_dap_accumulator_port=13017
JF17_dap_to_visualisation=12017
JF17_visualisation_port_dap=5117
@@ -1,59 +0,0 @@
- name: install receiver services
hosts: '{{ host }}'
become: true
handlers:
- name: restart_telegraf
service:
name: telegraf
state: restarted
tasks:
- name: install execution scripts
become_user: dbe
template: src={{item.src}} dest={{item.dest}}
with_items:
- { src: 'templates/streamvis.sh', dest: '/home/dbe/service_scripts/{{ detector }}-vis.sh' }
- { src: 'templates/stream2vis.sh', dest: '/home/dbe/service_scripts/{{ detector }}-stream2vis.sh' }
- { src: 'templates/buffer_writer-worker.sh', dest: '/home/dbe/service_scripts/{{ detector }}-buffer_writer-worker.sh' }
- { src: 'templates/udp_recv-worker.sh', dest: '/home/dbe/service_scripts/{{ detector }}-udp_recv-worker.sh' }
- { src: 'templates/assembler.sh', dest: '/home/dbe/service_scripts/{{ detector }}-assembler.sh' }
- name: install service files for all services
become_user: root
template: src={{item.src}} dest={{item.dest}}
with_items:
- { src: 'templates/udp_recv.service', dest: '/etc/systemd/system/{{ detector }}-udp_recv.service' }
- { src: 'templates/udp_recv-worker.service', dest: '/etc/systemd/system/{{ detector }}-udp_recv-worker@.service' }
- { src: 'templates/buffer_writer.service', dest: '/etc/systemd/system/{{ detector }}-buffer_writer.service' }
- { src: 'templates/buffer_writer-worker.service', dest: '/etc/systemd/system/{{ detector }}-buffer_writer-worker@.service' }
- { src: 'templates/stream2vis.service', dest: '/etc/systemd/system/{{ detector }}-stream2vis.service' }
- { src: 'templates/assembler.service', dest: '/etc/systemd/system/{{ detector }}-assembler.service' }
- { src: 'templates/streamvis.service', dest: '/etc/systemd/system/{{ detector }}-vis.service' }
- name: start detector services
systemd: state=started name={{item.name}} daemon_reload=yes
with_items:
- { name: '{{ detector }}-udp_recv' }
- { name: '{{ detector }}-buffer_writer' }
- { name: '{{ detector }}-stream2vis' }
- { name: '{{ detector }}-assembler' }
- { name: '{{ detector }}-vis' }
- name: telegraph feeding script
become_user: root
template:
src: templates/telegraph_feed.sh
dest: /usr/local/bin/telegraph_feed.sh
mode: '0755'
- name: telegraph configuration
become_user: root
template:
src: templates/telegraph_detector.conf
dest: /etc/telegraf/telegraf.d/{{ detector }}_daq.conf
notify:
- restart_telegraf
@@ -1,6 +0,0 @@
#!/bin/bash
for i in daq9 daq10 daq12 daq13
do
ansible-playbook $i.yml
done
@@ -1,13 +0,0 @@
[Unit]
Description=assembler: {{ detector }}
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash ./home/dbe/service_scripts/{{ detector }}-assembler.sh
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,10 +0,0 @@
#!/bin/bash
coreAssociated="{{ cores_assembler }}"
CONFIG={{ detector_config }}
#SERVICE={{ detector }}-assembler
#/home/dbe/service_scripts/check_config_changed.sh ${CONFIG} ${SERVICE} &
taskset -c ${coreAssociated} /home/dbe/bin/jf_assembler ${CONFIG}
@@ -1,17 +0,0 @@
[Unit]
Description={{ detector }} UDP2buffer worker instance as a service, instance %i
Requires={{ detector }}-buffer.service
Before={{ detector }}-buffer.service
BindsTo={{ detector }}-buffer.service
[Service]
PermissionsStartOnly=true
Type=idle
User=root
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-buffer-worker.sh %i
TimeoutStartSec=10
RestartSec=1
[Install]
WantedBy={{ detector }}-buffer.service
@@ -1,18 +0,0 @@
#!/bin/bash
if [ $# != 1 ]
then
systemctl start {{ detector }}-buffer-worker@{00..{{ last_module_number}}}
exit
fi
M=$1
coreAssociatedBuffer=({{ cores_receivers }})
initialUDPport={{ initial_udp_port }}
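# each module listens on its own UDP port: initial_udp_port + module number
# (the 10# prefix forces base 10 so zero-padded numbers like 08 are not parsed as octal)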
port=$((${initialUDPport}+10#${M}))
DETECTOR={{ detector_full_name }}
taskset -c ${coreAssociatedBuffer[10#${M}]} /home/dbe/bin/sf_buffer ${DETECTOR} M${M} ${port} /gpfs/photonics/swissfel/buffer/${DETECTOR} ${M}
@@ -1,11 +0,0 @@
[Unit]
Description=All UDP-buffer instances of {{ detector }}
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-buffer-worker.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -1,20 +0,0 @@
[Unit]
Description={{ detector }} UDP receiver worker instance as a service, instance %i
Requires={{ detector }}-buffer_writer.service
Before={{ detector }}-buffer_writer.service
BindsTo={{ detector }}-buffer_writer.service
[Service]
PermissionsStartOnly=true
Type=idle
User=root
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-buffer_writer-worker.sh %i
TimeoutStartSec=10
RestartSec=5
StartLimitInterval=0
Restart=on-failure
[Install]
WantedBy={{ detector }}-buffer_writer.service
@@ -1,14 +0,0 @@
#!/bin/bash
if [ $# != 1 ]
then
systemctl start {{ detector }}-buffer_writer-worker@{00..{{ last_module_number}}}
exit
fi
M=$1
coreAssociatedBuffer=({{ cores_buffer_writer }})
taskset -c ${coreAssociatedBuffer[10#${M}]} /home/dbe/bin/jf_buffer_writer {{ detector_config }} ${M}
@@ -1,11 +0,0 @@
[Unit]
Description=All module writer instances of {{ detector }}
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-buffer_writer-worker.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -1,21 +0,0 @@
#!/bin/bash
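# watch a configuration file and restart a service when the file changes:
# $1 = file to watch, $2 = systemd service to restart on modification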
F=$1
S=$2
t=`stat -c %y $F`
while true
do
sleep 5
t1=`stat -c %y $F`
if [ "${t1}" != "${t}" ]
then
echo $F changed
t=${t1}
systemctl restart ${S}
fi
done
@@ -1,3 +0,0 @@
# Check every hour whether the buffer is more than 80% occupied and, if so, remove all files older than 5 hours
{{ minutes_cleanup }} * * * * root /home/dbe/service_scripts/delete_old_files_in_buffer.sh 80 5
@@ -1,13 +0,0 @@
[Unit]
Description=dap accumulator for {{ detector }} detector
[Service]
User=root
TimeoutStartSec=2
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-dap_accumulator.sh
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,9 +0,0 @@
#!/bin/bash
source /sf/jungfrau/applications/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate sf-dap
CORE={{ dap_accumulator_cores }}
taskset -c ${CORE} python /sf/jungfrau/applications/sf-dap/code/accumulator.py --accumulator {{ dap_accumulator_host }} --accumulator_port {{ dap_accumulator_port }}
@@ -1,13 +0,0 @@
[Unit]
Description=streamvis after dap: {{ detector }}
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash ./home/dbe/service_scripts/{{ detector }}-dap_vis.sh
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,21 +0,0 @@
#!/bin/bash
source /home/dbe/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate vis
PORT={{ visualisation_port }}
PORT_BACKEND={{ dap_to_visualisation }}
H=`echo ${HOSTNAME} | sed 's/.psi.ch//'`
CORES="{{ visualisation_cores }}"
taskset -c ${CORES} \
streamvis {{ visualisation_view }} \
--allow-websocket-origin=${H}:${PORT} --allow-websocket-origin=localhost:${PORT} \
--port=${PORT} \
--address tcp://*:${PORT_BACKEND} --connection-mode bind \
--buffer-size 100 \
--page-title "dap: {{ visualisation_title }}"
@@ -1,13 +0,0 @@
[Unit]
Description=dap worker for {{ detector }} detector
[Service]
User=root
ExecStart=/bin/bash ./home/dbe/service_scripts/{{ detector }}-dap_worker.sh %i
TimeoutStartSec=2
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,29 +0,0 @@
#!/bin/bash
CORES=({{ dap_worker_cores }})
NW={{ dap_worker_cores.split() | length }}
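# one dap worker per entry in dap_worker_cores; workers are numbered 1..NW and
# each is pinned to the core at the matching position in CORES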
if [ $# != 1 ]
then
for i in `seq 1 ${NW}`
do
echo "Starting worker $i"
systemctl start {{ detector }}-dap_worker@$i
done
exit
fi
source /sf/jungfrau/applications/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate sf-dap
CP=`expr $1 - 1`
if [ ${CP} -lt 0 ] || [ ${CP} -ge ${NW} ]
then
echo "Error, not prepared worker number $1 (${CP} position in array), number of workers: ${NW}"
exit
fi
CORE=${CORES[${CP}]}
taskset -c ${CORE} python /sf/jungfrau/applications/sf-dap/code/worker.py --backend {{ dap_data_stream }} --accumulator {{ dap_accumulator_host }} --accumulator_port {{ dap_accumulator_port }} --visualisation {{ dap_visualisation_host }} --visualisation_port {{ dap_to_visualisation }} --peakfinder_parameters {{ dap_parameters_file }} --skip_frames_rate {{ dap_skip_frames }}
@@ -1,11 +0,0 @@
[Unit]
Description=All dap workers for {{ detector }} detector
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-dap_worker.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -1,6 +0,0 @@
[[inputs.exec]]
# name_override = "daqbuffer"
timeout = "10s"
interval = "10s"
data_format = "influx"
commands = ["sudo /usr/local/bin/telegraph_feed.sh JF06T32V02"]
@@ -1,31 +0,0 @@
#!/bin/bash
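# usage: delete_old_files_in_buffer.sh [threshold_percent] [max_age_hours]
# defaults: threshold 80%, age 5 hours; files are deleted only when the buffer
# filesystem occupancy exceeds the threshold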
hours=5
threshold=80
if [ $# -ge 1 ]
then
threshold=$1
fi
if [ $# -eq 2 ]
then
hours=$2
fi
df -h | grep BUFFER > /dev/null
if [ $? != 0 ]
then
# BUFFER is not present
exit
fi
occupancy=`df -h /gpfs/photonics/swissfel/buffer | grep BUFFER | awk '{print $5}' | sed 's/%//'`
if [ ${occupancy} -lt ${threshold} ]
then
# echo OK, no action
exit
fi
find /gpfs/photonics/swissfel/buffer/JF* -type f -mmin +$((${hours}*60)) -delete
find /gpfs/photonics/swissfel/buffer/JF*/M* -type d -mmin +$((${hours}*60)) -delete
@@ -1,2 +0,0 @@
Defaults:telegraf !requiretty
telegraf ALL=(root) NOPASSWD: /usr/local/bin/telegraph_feed.sh
@@ -1,16 +0,0 @@
#!/bin/bash
rpm -qa | grep zeromq-devel | grep zeromq-devel-4.3. > /dev/null
if [ $? != 0 ]
then
cd /etc/yum.repos.d/
if [ ! -f network:messaging:zeromq:release-stable.repo ]
then
wget https://download.opensuse.org/repositories/network:messaging:zeromq:release-stable/RHEL_7/network:messaging:zeromq:release-stable.repo
fi
yum remove -y zeromq openpgm libsodium-devel
yum install -y libsodium-devel zeromq-devel
fi
rpm -qa | grep zeromq-devel | grep zeromq-devel-4.3. > /dev/null
exit $?
@@ -1,12 +0,0 @@
[Unit]
Description=SF DAQ broker
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash ./home/dbe/service_scripts/sf-daq_broker.start.sh
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,7 +0,0 @@
#!/bin/bash
source /home/dbe/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate sf-daq
taskset -c 0 python /home/dbe/git/sf_daq_broker/sf_daq_broker/broker.py
@@ -1,12 +0,0 @@
[Unit]
Description=SF DAQ broker (slow)
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash ./home/dbe/service_scripts/sf-daq_broker_slow.start.sh
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,9 +0,0 @@
#!/bin/bash
export EPICS_CA_ADDR_LIST=sf-daq-cagw.psi.ch:5062
source /home/dbe/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate sf-daq
taskset -c 1 python /home/dbe/git/sf_daq_broker/sf_daq_broker/broker_manager_slow.py
@@ -1,20 +0,0 @@
#!/bin/bash
if [ $# != 1 ]
then
systemctl start sf-daq_detector_actions_worker@{01..{{ number_of_detector_actions_workers}}}
exit
fi
export EPICS_CA_ADDR_LIST=sf-daq-cagw.psi.ch:5062
source /home/dbe/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate sf-daq
M=$1
C={{ detector_actions_cores }}
export OMP_NUM_THREADS=1
taskset -c $C python /home/dbe/git/sf_daq_broker/sf_daq_broker/writer/start.py --writer_id $M --writer_type 3
@@ -1,12 +0,0 @@
[Unit]
Description=SF DAQ detector actions(pedestal, power-on) worker
[Service]
User=root
ExecStart=/bin/bash ./home/dbe/service_scripts/sf-daq_detector_actions.start.sh %i
TimeoutStartSec=2
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,11 +0,0 @@
[Unit]
Description=All sf-daq detector actions workers
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /home/dbe/service_scripts/sf-daq_detector_actions.start.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -1,19 +0,0 @@
#!/bin/bash
if [ $# != 1 ]
then
systemctl start sf-daq_detector_retrieve_writer@{01..{{ number_of_detector_retrieve_writers}}}
exit
fi
source /home/dbe/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate sf-daq
M=$1
C={{ detector_retrieve_cores }}
export OMP_NUM_THREADS=5
export NUMBA_NUM_THREADS=5
taskset -c $C python /home/dbe/git/sf_daq_broker/sf_daq_broker/writer/start.py --writer_id $M --writer_type 1
@@ -1,12 +0,0 @@
[Unit]
Description=SF DAQ detector writer
[Service]
User=root
ExecStart=/bin/bash ./home/dbe/service_scripts/sf-daq_detector_retrieve_writer.start.sh %i
TimeoutStartSec=2
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,11 +0,0 @@
[Unit]
Description=All sf-daq_writers detector writers (retrieve and conversion)
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /home/dbe/service_scripts/sf-daq_detector_retrieve_writer.start.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -1,17 +0,0 @@
#!/bin/bash
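# no argument: start every BS/EPICS/camera writer instance through the systemd template unit;
# with a writer id: run a single writer, the id doubling as the CPU core it is pinned to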
if [ $# != 1 ]
then
systemctl start sf-daq_writer@{01..{{ number_of_writers }}}
exit
fi
export EPICS_CA_ADDR_LIST=sf-daq-cagw.psi.ch:5062
source /home/dbe/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate sf-daq
M=$1
taskset -c $M python /home/dbe/git/sf_daq_broker/sf_daq_broker/writer/start.py --writer_id $M
@@ -1,12 +0,0 @@
[Unit]
Description=SF DAQ writer
[Service]
User=root
ExecStart=/bin/bash /home/dbe/service_scripts/sf-daq_writer.start.sh %i
TimeoutStartSec=2
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,11 +0,0 @@
[Unit]
Description=All sf-daq_writers (BS, EPICS, CAMERAS) writers
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /home/dbe/service_scripts/sf-daq_writer.start.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -1,13 +0,0 @@
#!/bin/bash
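# create the sf-msg-broker RabbitMQ container on first run; if it already exists but is stopped, start it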
docker ps -a | grep sf-msg-broker > /dev/null
if [ $? != 0 ]
then
docker run -d --hostname {{ansible_hostname}} --name sf-msg-broker --net=host rabbitmq:3-management
else
docker ps -a | grep sf-msg-broker | grep " Up " > /dev/null
if [ $? != 0 ]
then
docker start sf-msg-broker
fi
fi
@@ -1,24 +0,0 @@
#!/bin/bash
if [ $# != 2 ]
then
echo "Usage: $0 <beamline> <port>"
echo "example: $0 alvra 6000"
echo "Not enough input provided, exit"
exit
fi
BEAMLINE=$1
PORT=$2
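# create the per-beamline Redis container (in-memory only, persistence disabled with --save '')
# on first run; if it already exists but is stopped, start it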
docker ps -a | grep redis-${BEAMLINE} > /dev/null
if [ $? != 0 ]
then
docker run -d --rm --name redis-${BEAMLINE} --net=bridge -p ${PORT}:6379 redis redis-server --save ''
else
docker ps -a | grep redis-${BEAMLINE} | grep " Up " > /dev/null
if [ $? != 0 ]
then
docker start redis-${BEAMLINE}
fi
fi
@@ -1,14 +0,0 @@
[Unit]
Description={{ item.beamline_name }} Epics buffer
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash /home/dbe/service_scripts/sf.{{ item.beamline_name }}.epics_buffer.sh
ExecStop=/bin/bash -c "docker stop sf.{{ item.beamline_name }}.epics_buffer"
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,36 +0,0 @@
#!/bin/bash
SERVICE_NAME=sf.{{ item.beamline_name }}.epics_buffer
SERVICE_VERSION={{ epics_buffer_container_version }}
REDIS_HOST=127.0.0.1
REDIS_PORT={{ item.redis_port }}
CONFIG=/home/dbe/service_configs/sf.{{ item.beamline_name }}.epics_buffer.json
# start redis docker, if it's not running
/home/dbe/service_scripts/sf-redis.start.sh {{ item.beamline_name }} ${REDIS_PORT}
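# on first start, write a minimal config containing only the pulse-id PV and an empty PV list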
if [ ! -f ${CONFIG} ]
then
cat <<EOF > ${CONFIG}
{
"pulse_id_pv": "SLAAR11-LTIM01-EVR0:RX-PULSEID",
"pv_list": []
}
EOF
fi
/home/dbe/service_scripts/check_config_changed.sh ${CONFIG} sf.{{ item.beamline_name }}.epics_buffer &
taskset -c {{ item.buffer_cores }} \
docker run --rm --net=host \
--name="${SERVICE_NAME}" \
-e SERVICE_NAME="${SERVICE_NAME}" \
-e REDIS_HOST="${REDIS_HOST}" \
-e REDIS_PORT="${REDIS_PORT}" \
-e EPICS_CA_ADDR_LIST="sf-daq-cagw.psi.ch:5062 saresa-cagw.psi.ch:5062 saresb-cagw.psi.ch:5062 saresc-cagw.psi.ch:5062 satese-cagw.psi.ch:5062 satesf-cagw.psi.ch:5062" \
-v "${CONFIG}":/std_daq_service/config.json \
docker.io/paulscherrerinstitute/std-daq-service:"${SERVICE_VERSION}" \
epics_buffer \
--log_level {{ epics_buffer_log_level }}
@@ -1,14 +0,0 @@
[Unit]
Description={{ item.beamline_name }} Epics validator
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash /home/dbe/service_scripts/sf.{{ item.beamline_name }}.epics_validator.sh
ExecStop=/bin/bash -c "docker stop sf.{{ item.beamline_name }}.epics_validator"
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,23 +0,0 @@
#!/bin/bash
SERVICE_NAME=sf.{{ item.beamline_name }}.epics_validator
SERVICE_VERSION={{ epics_buffer_container_version }}
REDIS_HOST=127.0.0.1
REDIS_PORT={{ item.redis_port }}
BROKER_HOST=127.0.0.1
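# run epics_validator from the std-daq-service container, using the same Redis/broker
# endpoints and beamline data mounts as the epics writer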
taskset -c {{ item.validator_cores }} \
docker run --rm --net=host \
--name="${SERVICE_NAME}" \
-e SERVICE_NAME="${SERVICE_NAME}" \
-e BROKER_HOST="${BROKER_HOST}" \
-e REDIS_HOST="${REDIS_HOST}" \
-e REDIS_PORT="${REDIS_PORT}" \
-v /sf/{{ item.beamline_name }}/data:/sf/{{ item.beamline_name }}/data \
-v /gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }}-staff:/gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }}-staff \
-v /gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }}:/gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }} \
docker.io/paulscherrerinstitute/std-daq-service:"${SERVICE_VERSION}" \
epics_validator \
sf.{{ item.beamline_name }}.epics_writer \
--log_level {{ epics_buffer_log_level }}
@@ -1,14 +0,0 @@
[Unit]
Description={{ item.beamline_name }} Epics writer
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash /home/dbe/service_scripts/sf.{{ item.beamline_name }}.epics_writer.sh
ExecStop=/bin/bash -c "docker stop sf.{{ item.beamline_name }}.epics_writer"
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
@@ -1,26 +0,0 @@
#!/bin/bash
SERVICE_NAME=sf.{{ item.beamline_name }}.epics_writer
SERVICE_VERSION={{ epics_buffer_container_version }}
REDIS_HOST=127.0.0.1
REDIS_PORT={{ item.redis_port }}
BROKER_HOST=127.0.0.1
# start redis docker, if it's not running
/home/dbe/service_scripts/sf-redis.start.sh {{ item.beamline_name }} ${REDIS_PORT}
taskset -c {{ item.writer_cores }} \
docker run --rm --net=host \
--name="${SERVICE_NAME}" \
-e SERVICE_NAME="${SERVICE_NAME}" \
-e BROKER_HOST="${BROKER_HOST}" \
-e REDIS_HOST="${REDIS_HOST}" \
-e REDIS_PORT="${REDIS_PORT}" \
-v /sf/{{ item.beamline_name }}/data:/sf/{{ item.beamline_name }}/data \
-v /gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }}-staff:/gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }}-staff \
-v /gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }}:/gpfs/photonics/{{ item.beamline_storage }}/raw/{{ item.beamline_name }} \
docker.io/paulscherrerinstitute/std-daq-service:"${SERVICE_VERSION}" \
epics_writer \
--tag epics_{{ item.beamline_name }} \
--log_level {{ epics_buffer_log_level }}
@@ -1,55 +0,0 @@
#!/bin/bash
# needed, otherwise executing with Ansible won't work
# see: https://github.com/conda/conda/issues/7267
unset SUDO_UID SUDO_GID SUDO_USER
if [ ! -f /home/dbe/miniconda3/bin/conda ]
then
echo "Getting Miniconda"
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh -b -p /home/dbe/miniconda3
rm -rf Miniconda3-latest-Linux-x86_64.sh
fi
# Setup the conda environment.
source /home/dbe/miniconda3/etc/profile.d/conda.sh
CONDA_ENV_NAME=sf-daq
envtest=$(conda env list | grep ${CONDA_ENV_NAME})
if [ $? != 0 ]; then
echo "Creating the ${CONDA_ENV_NAME} environment"
conda install -n base conda-libmamba-solver -y
conda config --set channel_priority disabled
conda config --set solver libmamba
conda create -y -n ${CONDA_ENV_NAME} -c paulscherrerinstitute -c conda-forge data_api jungfrau_utils
conda deactivate
conda activate ${CONDA_ENV_NAME}
conda install -y -c conda-forge bottle pika ujson
conda install -y -c slsdetectorgroup -c conda-forge slsdet=6.1.1
conda install -y -c paulscherrerinstitute -c conda-forge pyepics
else
conda deactivate
conda activate ${CONDA_ENV_NAME}
fi
if [ ! -d /home/dbe/git ]; then
echo "No git repo found, cloning it..."
mkdir /home/dbe/git
fi
REPO=sf_daq_broker
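# clone sf_daq_broker on first run and install it in development mode into the sf-daq environment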
if [ ! -d /home/dbe/git/${REPO} ]
then
cd /home/dbe/git && git clone \
https://github.com/paulscherrerinstitute/${REPO}.git
echo "Setting up develop"
cd /home/dbe/git/${REPO} && python setup.py develop
fi
@@ -1,19 +0,0 @@
#!/bin/bash
# needed, otherwise executing with Ansible won't work
# see: https://github.com/conda/conda/issues/7267
unset SUDO_UID SUDO_GID SUDO_USER
if [ ! -d /home/dbe/git ]; then
echo "No git repo found, cloning it..."
mkdir /home/dbe/git
fi
REPO=sf_daq_buffer
if [ ! -d /home/dbe/git/${REPO} ]; then
cd /home/dbe/git && git clone https://github.com/paulscherrerinstitute/${REPO}.git
source /opt/rh/devtoolset-9/enable
cd /home/dbe/git/${REPO} && mkdir -p build && cd build/ && cmake3 .. && make
fi
@@ -1,19 +0,0 @@
#!/bin/bash
# needed, otherwise executing with Ansible won't work
# see: https://github.com/conda/conda/issues/7267
unset SUDO_UID SUDO_GID SUDO_USER
if [ ! -d /home/dbe/git ]; then
echo "No git repo found, cloning it..."
mkdir /home/dbe/git
fi
REPO=sf_daq_buffer
if [ ! -d /home/dbe/git/${REPO} ]; then
cd /home/dbe/git && git clone https://github.com/paulscherrerinstitute/${REPO}.git
source /opt/rh/devtoolset-9/enable
cd /home/dbe/git/${REPO} && mkdir -p build && cd build/ && cmake3 .. && make
fi
-15
View File
@@ -1,15 +0,0 @@
[Unit]
Description=stream service (to streamvis and live analysis) of {{ detector }}
[Service]
PermissionsStartOnly=true
Type=idle
User=root
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-stream.sh
TimeoutStartSec=10
Restart=on-failure
RestartSec=1
[Install]
WantedBy=multi-user.target
-10
View File
@@ -1,10 +0,0 @@
#!/bin/bash
coreAssociated="{{ stream_cores }}"
CONFIG=/gpfs/photonics/swissfel/buffer/config/{{ stream_config }}
SERVICE={{ detector }}-stream
/home/dbe/git/sf_daq_buffer/scripts/check_config_changed.sh ${CONFIG} ${SERVICE} &
taskset -c ${coreAssociated} /home/dbe/bin/sf_stream ${CONFIG}
@@ -1,16 +0,0 @@
[Unit]
Description=stream service to streamvis of {{ detector }}
[Service]
PermissionsStartOnly=true
Type=idle
User=root
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-stream2vis.sh
TimeoutStartSec=10
Restart=on-failure
RestartSec=2
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
-11
View File
@@ -1,11 +0,0 @@
#!/bin/bash
coreAssociated="{{ stream2vis_cores }}"
#CONFIG=/gpfs/photonics/swissfel/buffer/config/{{ detector_config }}
CONFIG={{ detector_config }}
SERVICE={{ detector }}-stream2vis
/home/dbe/service_scripts/check_config_changed.sh ${CONFIG} ${SERVICE} &
taskset -c ${coreAssociated} /home/dbe/bin/sf_stream ${CONFIG} vis
@@ -1,13 +0,0 @@
[Unit]
Description=streamvis: {{ detector }}
[Service]
User=root
TimeoutStartSec=2
ExecStart=/bin/bash /home/dbe/service_scripts/{{ detector }}-vis.sh
Restart=on-failure
RestartSec=4
[Install]
WantedBy=multi-user.target
-20
View File
@@ -1,20 +0,0 @@
#!/bin/bash
source /home/dbe/miniconda3/etc/profile.d/conda.sh
conda deactivate
conda activate vis
PORT={{ visualisation_port }}
PORT_BACKEND={{ visualisation_incoming_data_port }}
H=`echo ${HOSTNAME} | sed 's/.psi.ch//'`
BACKEND=${H}
CORES="{{ visualisation_cores }}"
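# serve streamvis on ${PORT}, allow websocket origins for both the host name and its alias,
# and read the detector data from the local backend on ${PORT_BACKEND}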
taskset -c ${CORES} \
streamvis {{ visualisation_view }} --allow-websocket-origin=${H}:${PORT} --allow-websocket-origin={{ visualisation_alias }}:${PORT} \
--port=${PORT} --address tcp://${BACKEND}:${PORT_BACKEND} \
--page-title {{ visualisation_title }}
@@ -1,29 +0,0 @@
#!/bin/bash
# needed, otherwise executing with Ansible won't work
# see: https://github.com/conda/conda/issues/7267
unset SUDO_UID SUDO_GID SUDO_USER
if [ ! -f /home/dbe/miniconda3/bin/conda ]
then
echo "Getting Miniconda"
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh -b -p /home/dbe/miniconda3
rm -rf Miniconda3-latest-Linux-x86_64.sh
fi
# Setup the conda environment.
source /home/dbe/miniconda3/etc/profile.d/conda.sh
envtest=$(conda env list | grep vis)
if [ $? != 0 ]; then
echo "Creating the vis environment"
conda install -n base conda-libmamba-solver -y
conda config --append channels conda-forge
conda config --append channels paulscherrerinstitute
conda config --set channel_priority disabled
conda config --set solver libmamba
conda create -n vis -y streamvis={{ streamvis_version }}
fi
@@ -1,5 +0,0 @@
[[inputs.exec]]
timeout = "10s"
interval = "10s"
data_format = "influx"
commands = ["sudo /usr/local/bin/telegraph_feed.sh {{ detector_full_name }}"]
@@ -1,32 +0,0 @@
#!/bin/bash
if [ $# -ne 1 ]
then
echo "Usage : $0 DETECTOR"
echo "Example : $0 JF06T32V02"
exit
fi
DETECTOR=$1
DS=`echo ${DETECTOR} | cut -c 3-4`
if [ ${DETECTOR} == JF06T08V04 ]
then
DS=06_4M
fi
NM=`echo ${DETECTOR} | cut -c 6-7`
NM=`expr ${NM} - 1`
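# print the latest journal line from each udp_recv and buffer_writer worker (jf* measurements)
# and from the stream2vis service (sf* measurement); telegraf ingests these lines in influx format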
for SERVICE in udp_recv buffer_writer
do
for m in `seq -w 00 ${NM}`
do
journalctl -u JF${DS}-${SERVICE}-worker@${m} -n 1 | tail -1 | awk -F ': ' '{print $2}' | sed 's/-/_/g' | grep "^jf"
done
done
journalctl -u JF${DS}-stream2vis -n 1 | tail -1 | awk -F ': ' '{print $2}' | sed 's/-/_/g' | grep "^sf"
exit 0
@@ -1,17 +0,0 @@
[Unit]
Description={{ detector }} UDP receiver worker instance as a service, instance %i
Requires={{ detector }}-udp_recv.service
Before={{ detector }}-udp_recv.service
BindsTo={{ detector }}-udp_recv.service
[Service]
PermissionsStartOnly=true
Type=idle
User=root
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-udp_recv-worker.sh %i
TimeoutStartSec=10
RestartSec=1
[Install]
WantedBy={{ detector }}-udp_recv.service
@@ -1,18 +0,0 @@
#!/bin/bash
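# no argument: start all per-module receiver instances through the systemd template unit;
# with a module number: run one jf_udp_recv pinned to that module's CPU core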
if [ $# != 1 ]
then
systemctl start {{ detector }}-udp_recv-worker@{00..{{ last_module_number }}}
exit
fi
M=$1
coreAssociatedBuffer=({{ cores_udp_recv_receivers }})
initialUDPport={{ initial_udp_port }}
port=$((${initialUDPport}+10#${M}))
DETECTOR={{ detector_full_name }}
taskset -c ${coreAssociatedBuffer[10#${M}]} /home/dbe/bin/jf_udp_recv {{ detector_config }} ${M}
@@ -1,11 +0,0 @@
[Unit]
Description=All UDP receiver instances of {{ detector }}
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /home/dbe/service_scripts/{{ detector }}-udp_recv-worker.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target