Computing Services Unit




This unit manages the institute's computational resources. It aims to configure and manage software and hardware in the way best suited to the needs of our scientists.


TASKS

The unit takes care of the following tasks:

Systems:

  • Management and maintenance of hardware (computational cluster, database cluster, OpenNebula cluster, disk servers, disk cabinets, network switches, desktops, laptops and printers).
  • Installation, configuration and maintenance of computer software (operating system, compilers, libraries, office applications and other scientific and administrative software).
  • Deployment and management of OpenNebula for virtualization of services and for private cloud computing.

Computation and data mining:

  • Configuration and administration of HPC batch system.
  • Data gathering using both APIs and web scraping. Data cleaning.
  • Administration of MongoDB distributed database and of data repository.
  • User advice (use of the HPC cluster, efficient computation and data handling, visualization techniques and use of databases).
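As an illustration of how the HPC batch system is typically used, a minimal Slurm job script might look as follows. This is a generic sketch: the job name, resource figures and payload executable are hypothetical, not taken from the actual cluster configuration.

```shell
#!/bin/bash
# Hypothetical Slurm job script; resource requests are illustrative only.
#SBATCH --job-name=example_run     # job name shown in the queue
#SBATCH --ntasks=1                 # a single task
#SBATCH --cpus-per-task=4          # four cores for that task
#SBATCH --mem=8G                   # total memory for the job
#SBATCH --time=01:00:00            # wall-clock limit (1 hour)

# Report where the job landed, then launch the (hypothetical) payload.
echo "Job ${SLURM_JOB_ID:-local} running on $(hostname)"
srun ./my_simulation               # hypothetical executable
```

Such a script would be submitted with `sbatch script.sh` and monitored with `squeue`; the partition names and resource limits actually available depend on the cluster configuration.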

Web and multimedia:

  • Development of specific applications for IFISC web page.
  • Administration and technical maintenance of the institute web (public pages, inventory and intranet).
  • Development of tailored webpages for scientific conferences and meetings.
  • Management and maintenance of seminar broadcast system.



STAFF

The IFISC Computing Services Unit is supervised by Prof. Pere Colet. The IFISC web page is supervised by Prof. David Sánchez, and data mining by José Javier Ramasco.


EQUIPMENT

Our computational resources include:

  • Nuredduna: a High Performance Computing (HPC) cluster comprising 27 nodes with AMD Epyc Rome, Bergamo, Genoa and Turin processors, totalling 1680 cores and 16TB of RAM, interconnected by low-latency 25GbE networking. It is used for big-data analysis and memory-intensive simulations, and is also equipped with 11 GPUs (RTX 3090, L40s and RTX 6000 Ada models). All resources are orchestrated by the Slurm resource manager.
  • Database cluster: used for big-data storage and management. Data is handled with the MongoDB non-relational database, running on a primary node with 512GB of RAM and 42TB of SSD raw storage and a replica node with 256GB of RAM and 40TB of HDD raw storage.
  • Private cloud: an OpenNebula cluster hosting more than 70 virtual servers and Docker containers. It comprises one management node and 5 compute nodes with a total of 180 cores, 2TB of RAM and 125TB of SSD raw storage.
  • Data repository: a disk server with 220TB of rotational raw storage capacity, 128GB of RAM and 2 x 200GB SSDs for cache, connected via 10Gbit DAC cable and powered by ZFS.
  • NFS home directories: a disk server with 126TB of NVMe raw storage capacity and 378GB of RAM, also powered by ZFS.
  • Backup infrastructure: 2 disk servers with 450TB of rotational raw storage capacity and 300GB of RAM, powered by MongoDB and ZFS.
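The primary/replica layout of the database cluster described above corresponds to a standard MongoDB replica set. A sketch of how such a two-member set could be initiated from the MongoDB shell is shown below; the host names, port and set name are hypothetical, not the institute's actual configuration.

```shell
# Hypothetical hosts; run once against the intended primary node.
mongosh --host db-primary:27017 --eval '
  rs.initiate({
    _id: "rs0",                                           // replica-set name
    members: [
      { _id: 0, host: "db-primary:27017", priority: 2 },  // preferred primary
      { _id: 1, host: "db-replica:27017", priority: 1 }   // replica
    ]
  })'
```

Note that a two-member set provides data redundancy but not automatic failover; production deployments normally add a third member or an arbiter so that a majority can still elect a primary when one node is down.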

Transparent access to the computational clusters and servers is provided through a fully integrated network of about 100 Linux desktops, complemented by several Windows desktops and iMacs and around 60 laptops.

The above equipment is complemented by an Epson Stylus Pro 9880 44-inch plotter, several color printers and multi-function systems.

IFISC has also developed a dedicated system to webcast seminars live and to distribute the recordings on demand.

