The High Luminosity run of the Large Hadron Collider (HL-LHC), foreseen to start around 2026, will increase the delivered luminosity by roughly an order of magnitude compared to the original design, allowing an integrated luminosity of about 3000 fb^(-1) to be collected. This large dataset will enable deeper studies of the Standard Model of particle physics and searches for physics beyond it. To sustain the higher collision rate and radiation doses, and to improve the precision of charged-track reconstruction, a new silicon pixel detector will be installed in the CMS experiment. It will provide better impact-parameter measurements for identifying b-quarks and tau-leptons from top-quark and Higgs-boson decays, which are believed to offer the best opportunities to investigate physics beyond the Standard Model. Such an improved silicon pixel detector will require unprecedented radiation tolerance (up to 1 Grad) and should offer "intelligent" data processing in the front-end chip, while keeping power consumption and material budget low. The readout chip will be developed in a planar 65 nm technology. This is the first time this technology will be used in high-energy physics experiments, owing to the complexity of the design rules for its analogue and digital circuitry; it is more radiation tolerant and less power hungry than the technologies used so far, and it allows more logic to be integrated in the same device area.

This thesis describes the impact of the foreseen upgraded silicon pixel detector on CMS physics at the HL-LHC through the precise measurement of charged-track trajectories, achieved by reducing the material budget: lossless data-compression algorithms embedded in the pixel readout chip allow fewer readout cables to be used. The outcome is not only a lower material budget due to the smaller number of cables, but also fewer downstream transmission stages, which reduces the power consumption of the silicon detector.
Two lossless data-compression techniques are studied, based on Huffman and arithmetic coding. These encoding methods were selected on the basis of the implementation requirements: the compression logic should be as simple as possible, to avoid complex designs that lead to large, power-hungry circuitry, and it should be very fast, given the very high readout rate of 750 MHz. The results yield various solutions, with compression ratios ranging between 1.2 and 2.9 depending on the implementation complexity. This activity has been developed within the projects INFIERI (FP7-PEOPLE-2012-ITN, project number 317446), CHIPIX65 and RD53, funded by the EU, INFN and CERN, respectively.
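To illustrate the Huffman side of the comparison, the sketch below builds a textbook Huffman prefix code and computes the resulting compression ratio on a toy symbol stream. It is a minimal software illustration only: the symbol alphabet, frequency distribution, and the hardware encoder studied in the thesis are not reproduced here, and the toy data and 8-bit baseline are assumptions chosen for the example.

```python
# Minimal Huffman-coding sketch (illustrative; not the thesis's pixel encoder).
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman prefix code {symbol: bitstring} from a symbol list."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, partial code table).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        # Prefix "0"/"1" to the codes of each merged subtree.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def compression_ratio(symbols, bits_per_symbol=8):
    """Ratio of fixed-width size to Huffman-encoded size."""
    code = huffman_code(symbols)
    encoded_bits = sum(len(code[s]) for s in symbols)
    return (len(symbols) * bits_per_symbol) / encoded_bits

# Toy data with a skewed distribution, as in sparse pixel hit maps:
# the most probable value gets the shortest codeword.
data = [0] * 70 + [1] * 15 + [2] * 10 + [3] * 5
print(huffman_code(data))
print(round(compression_ratio(data), 2))
```

The on-chip constraints mentioned above (simple, fast logic) are why a fixed, precomputed code table would be used in hardware rather than rebuilding the tree at run time; arithmetic coding trades this simplicity for compression closer to the entropy limit.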