In order to interact with High Throughput Compute (HTC) resources, you should have access to a User Interface, often referred to as a UI. This software environment provides all the tools required to interact with the different middleware, as different sites can use different Computing Elements (CEs), such as HTCondorCE and ARC-CE (CREAM is a legacy software stack that is not officially supported).
The UI contains a suite of clients and APIs that users and applications can use to access High Throughput Compute services.
It will also install the IGTF distribution of trust anchors.
The UI is available as a package in the UMD software distribution, but it will also require additional software and configuration: the package relies on packages available in additional repositories (EPEL, UMD, HTCondor and CVMFS, as configured in the installation example below).
Once the UI is installed, you will need to set it up to be able to interact with the resources available to a given Virtual Organisation (VO).
In order to help with deploying the UI, different solutions are possible:
- Deploying the UI manually, using the packages available from the UMD repositories.
  Once the repositories are configured by installing the `umd-release` package, install
  the `ui` meta-package, and configure the system to interact with the VOMS servers of
  the VO to be used (a quick sanity check is shown after this list).

  ```shell
  # Install EPEL repository
  $ dnf install -y epel-release
  # Install UMD repositories, look for available UMD release on https://repository.egi.eu/
  # FIXME: As of 2024-08, fall back on WLCG + upstreams repositories in place of UMD repo
  $ dnf install -y https://linuxsoft.cern.ch/wlcg/el9/x86_64/wlcg-repo-1.0.0-1.el9.noarch.rpm
  $ dnf install -y https://research.cs.wisc.edu/htcondor/repo/23.x/htcondor-release-current.el9.noarch.rpm
  $ dnf install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
  $ dnf config-manager --set-enabled crb
  $ dnf localinstall -y ui-*.rpm
  ```
- Some Ansible roles are available in the EGI Federation GitHub organisation: mainly
  ansible-role-ui, which should be used together with ansible-role-VOMS-client,
  providing the software and material required for authentication and authorisation,
  and ansible-role-umd, configuring the software repositories from which all the
  software will be installed.
- The ui-deployment repository provides a Terraform-based deployment that allows
  deploying a User Interface (UI) in a Cloud Compute virtual machine. This integrated
  deployment is based on the Ansible modules, and should be adjusted to your
  environment and needs.
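After a manual installation, a quick sanity check can confirm that the `ui` meta-package
and some of the bundled clients are in place. This is only a sketch; the exact set of
client commands depends on the UMD release that was installed.

```shell
# Check that the ui meta-package is installed
$ rpm -q ui
# Check that some of the bundled clients are available on the PATH
$ command -v voms-proxy-init condor_submit
```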
If you have installed the `ui` meta-package manually, from the UMD repository, you need
to configure the support of the VO(s) you want to use on the UI.
- Look for the VO ID card for the VO you want to use on the Operations Portal.
  - You can also infer the URL from the VO name: for dteam the VO ID card is available
    at https://operations-portal.egi.eu/vo/view/voname/dteam.
- Access the VO-specific VOMS server; it should be the one mentioned in the Registry
  Information section of the VO ID card. For your convenience, you should be able to
  use the link in the Enrolment URL.
  - For dteam it's https://voms2.hellasgrid.gr:8443/voms/dteam/.
- Once on the VOMS server, open the Configuration Info section.
  - For dteam it's the page
    https://voms2.hellasgrid.gr:8443/voms/dteam/configuration/configuration.action.
The VOMS configuration page contains the information required to configure your UI so
that it can interact with the VOMS server of your VO.
- As an example with the dteam VO, you can find the VOMS server address in the dteam
  VO ID card.
- Then, looking at the dteam VOMS Configuration page, you can create:
  - `/etc/grid-security/vomsdir/<vo-name>/<voms-hostname>.lsc`, adjusting the file name
    according to the VO.
    - For dteam, the VOMS server is `voms2.hellasgrid.gr`, so the file would be named
      `/etc/grid-security/vomsdir/dteam/voms2.hellasgrid.gr.lsc`, with the content for
      the LSC configuration:

      ```
      /C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr
      /C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2016
      ```

  - `/etc/vomses/<vo-name>-<voms-hostname>`, adjusting the file name according to the VO.
    - For dteam, the VOMS server is `voms2.hellasgrid.gr`, so the file would be named
      `/etc/vomses/dteam-voms2.hellasgrid.gr`, with the content of the VOMSES string:

      ```
      "dteam" "voms2.hellasgrid.gr" "15004" "/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr" "dteam"
      ```
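If you prefer to create these files from the command line, the following sketch
reproduces the dteam example above; the paths and contents are exactly the ones shown
in the previous steps, and should be adjusted for other VOs.

```shell
# Sketch for the dteam example above (run as root); adjust names and content for your VO
$ mkdir -p /etc/grid-security/vomsdir/dteam /etc/vomses
$ cat > /etc/grid-security/vomsdir/dteam/voms2.hellasgrid.gr.lsc << 'EOF'
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2016
EOF
$ cat > /etc/vomses/dteam-voms2.hellasgrid.gr << 'EOF'
"dteam" "voms2.hellasgrid.gr" "15004" "/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr" "dteam"
EOF
```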
If you cannot edit content in `/etc/vomses` and `/etc/grid-security/vomsdir`, you can
respectively use `~/.glite/vomses` and `~/.glite/vomsdir`. You may have to export
`X509_VOMSES` and `X509_VOMS_DIR` in your shell, as documented on CERN's twiki:

```shell
$ export X509_VOMSES=~/.glite/vomses
$ export X509_VOMS_DIR=~/.glite/vomsdir
```
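Once the VOMS configuration is in place, one way to verify it is to request a VOMS
proxy for the VO. This assumes that you have a valid user certificate installed (for
example in `~/.globus`) and that you are already a registered member of the VO.

```shell
# Request a VOMS proxy for dteam and inspect it to confirm the VO attributes are present
$ voms-proxy-init --voms dteam
$ voms-proxy-info --all
```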
If you are using Ansible, the following roles can be used:
- egi_federation.ansible_role_umd, to configure the UMD repository
- egi_federation.ansible_role_voms-client, to configure the VOMS client for all known production VOs
- egi_federation.ui, to configure the UI.
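As a sketch of how these roles could be used, assuming they are published on Ansible
Galaxy under the names listed above and that you have an inventory and a playbook
(here called `ui.yml`, a hypothetical name) applying the three roles to the target host:

```shell
# Install the roles (names as listed above; adjust if they differ on Ansible Galaxy)
$ ansible-galaxy role install egi_federation.ansible_role_umd \
    egi_federation.ansible_role_voms-client egi_federation.ui
# Apply a playbook that includes the three roles
$ ansible-playbook -i inventory ui.yml
```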
The ui-deployment repository provides a Terraform-based deployment that allows
deploying a User Interface (UI) in a Cloud Compute virtual machine. This integrated
deployment is based on the Ansible modules, and should be adjusted to your environment
and needs.
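A typical workflow could look like the sketch below. The repository URL is an
assumption (it is expected to live in the EGI-Federation GitHub organisation, like the
other repositories mentioned on this page), and the Terraform variables and Ansible
configuration have to be adjusted to your cloud environment before applying.

```shell
# Sketch: clone the deployment repository (URL assumed from the EGI-Federation organisation)
$ git clone https://github.com/EGI-Federation/ui-deployment.git
$ cd ui-deployment
# Review and adjust the Terraform variables and Ansible configuration, then deploy
$ terraform init
$ terraform apply
```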
The required build dependencies are:
- rpm-build
- make
- rsync
```shell
# Checkout tag to be packaged
$ git clone https://github.com/EGI-Federation/ui-metapackage.git
$ cd ui-metapackage
$ git checkout X.X.X
# Building in a container
$ docker run --rm -v $(pwd):/source -it almalinux:9
[root@bc96d4c5a232 /]# dnf install -y rpm-build make rsync rpmlint systemd-rpm-macros
[root@bc96d4c5a232 /]# cd /source && make rpm
[root@bc96d4c5a232 /]# rpmlint --file .rpmlint.ini build/RPMS/x86_64/*.rpm
```
The RPM will be available in the `build/RPMS` directory.
- Prepare a changelog from the last version, including contributors' names
- Prepare a PR with
  - Updated version and changelog in `ui.spec`
  - Updated version and changelog in `CHANGELOG`
- Once the PR has been merged, publish a new release using the GitHub web interface
  - Prefix the tag name to be created with `v`, like `v1.0.0`
  - Packages will be built using GitHub Actions and attached to the release page
This work started under the EGEE project. It is now hosted on GitHub and maintained by
the EGI Federation.