Computing Services Unit



This unit manages the institute's computational resources. It aims to configure and manage software and hardware in the way most suitable for our scientists.


TASKS

The unit takes care of the following tasks:

Systems:
  • Management and maintenance of hardware (computational cluster, database cluster, OpenNebula cluster, disk servers, disk cabinets, network switches, desktops, laptops and printers).
  • Installation, configuration and maintenance of computer software (operating system, compilers, libraries, office applications and other scientific and administrative software).
  • Deployment and management of OpenNebula for virtualization of services and for private cloud computing.
Computation and data mining:
  • Configuration and administration of the HPC batch system.
  • Data gathering using both APIs and web scraping, and data cleaning (see the sketch after this list).
  • Administration of the MongoDB distributed database and of the data repository.
  • User advice (use of the HPC cluster, efficient computation and data handling, visualization techniques and use of databases).
Web and multimedia:
  • Development of specific applications for the IFISC web page.
  • Administration and technical maintenance of the institute's website (public pages, inventory and intranet).
  • Development of tailored web pages for scientific conferences and meetings.
  • Management and maintenance of the seminar broadcast system.
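
As an illustration of the data-gathering and cleaning task listed above, the following minimal Python sketch downloads records from a web API and applies a basic cleaning pass. The endpoint URL, request parameters and field names are hypothetical placeholders, not an actual IFISC data source.

    # Minimal sketch of API-based data gathering and cleaning.
    # The endpoint, parameters and field names are hypothetical placeholders.
    import requests

    API_URL = "https://api.example.org/v1/records"  # placeholder endpoint

    def fetch_records(page_size=100, max_pages=10):
        """Download paginated JSON records from a web API."""
        records = []
        for page in range(max_pages):
            response = requests.get(
                API_URL,
                params={"page": page, "per_page": page_size},
                timeout=30,
            )
            response.raise_for_status()
            batch = response.json()
            if not batch:
                break
            records.extend(batch)
        return records

    def clean_records(records):
        """Basic cleaning: drop incomplete entries and normalize whitespace."""
        cleaned = []
        for item in records:
            if not item.get("id") or item.get("value") is None:
                continue  # discard incomplete entries
            cleaned.append({"id": item["id"], "value": str(item["value"]).strip()})
        return cleaned

    if __name__ == "__main__":
        data = clean_records(fetch_records())
        print(f"Kept {len(data)} records after cleaning")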



STAFF

The IFISC Computing Services Unit is supervised overall by Prof. Pere Colet. The IFISC web page is supervised by Prof. David Sánchez and the data mining activities by José Javier Ramasco.


EQUIPMENT

Our computational resources include:

  • Nuredduna: a High Performance Computing (HPC) cluster comprising 20 Atos Bull Sequana nodes with AMD Epyc Rome processors, with a total of 960 cores, 12 TB of RAM and low-latency 25GbE communications, used for big data analysis and memory-intensive simulations. It is complemented by a High Throughput Computing (HTC) cluster with 46 IBM iDataPlex dx360M4 nodes, with a total of 552 cores, 3.1 TB of RAM and GbE communications, used for less demanding calculations.
  • Database cluster: used for big data storage and management. Data is handled with the MongoDB non-relational database, which comprises a primary node with 512 GB of RAM and 42 TB of SSD raw storage and a replica node with 256 GB of RAM and 40 TB of HD raw storage (see the connection sketch after this list).
  • Private cloud: an OpenNebula cluster that handles virtual servers, comprising one management node and 6 compute nodes with a total of 180 cores, 1.7 TB of RAM and 70 TB of SSD raw storage.
  • Data repository: IBM DS4700 disk cabinet with 80 TB of raw storage capacity, connected via fiber channel to 4 dx360M2 servers.
  • NFS server: with 80 TB of raw storage capacity, 128 GB of RAM and 4x256 GB of SSD cache, powered by ZFS and used to store users' home directories.
  • Data backup server: with 104 TB of raw storage capacity, powered by ZFS and used to back up home directories, the data repository, operating systems and configurations, and users' laptop data.
  • Computational server: with 32 cores and 512 GB of RAM, used for memory-intensive scientific calculations.
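
Since the database cluster described above is a small MongoDB replica set (one primary node and one replica node), a minimal connection sketch in Python is shown below. The host names, replica-set name, database and collection are hypothetical placeholders rather than the actual IFISC configuration.

    # Minimal sketch of connecting to a two-member MongoDB replica set.
    # Host names, replica-set name, database and collection are placeholders.
    from pymongo import MongoClient

    # Listing both members lets the driver discover the current primary;
    # secondaryPreferred routes reads to the replica when it is available.
    uri = (
        "mongodb://db-primary.example.org:27017,db-replica.example.org:27017/"
        "?replicaSet=rs0&readPreference=secondaryPreferred"
    )

    client = MongoClient(uri, serverSelectionTimeoutMS=5000)
    collection = client["datarepo"]["records"]  # placeholder database/collection

    # Writes always go to the primary; reads may be served by the replica.
    collection.insert_one({"source": "api", "value": 42})
    for doc in collection.find({"source": "api"}).limit(5):
        print(doc)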

Transparent access to the computational clusters and servers is provided through a fully integrated network of about 60 Linux desktops, complemented by several Windows desktops and iMacs and around 40 laptops.

The above equipment is complemented by a 44-inch Epson Stylus Pro 9880 plotter, several color printers and multi-functional systems.

IFISC has also developed a specific system to webcast seminars live and to distribute the recordings on demand.


