Most modern processor architectures use a hierarchical cache structure (from caches private to each core up to a shared cache) built around the principle of spatial and temporal locality. The latter means that after an access to data in memory, nearby addresses in RAM are likely to be touched next, and there is a high probability of repeated access to the data already requested. However, the capacity of a core's private cache is very limited, and when it fills up, data has to be evicted back to main memory. All of this costs time and energy. The researchers believe that new algorithms for these mechanisms can speed up work with the cache by 15% while cutting the energy consumed by cache operations by 25%.
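The locality principle is easy to observe from software. The short C program below is an illustrative sketch (not taken from the paper): it sums the same matrix twice, and the row-by-row traversal uses every byte of each cache line it fetches, while the column-by-column traversal wastes most of them, which typically makes it several times slower.

#include <stdio.h>
#include <time.h>

#define N 2048

static double grid[N][N];

/* Row-by-row traversal: consecutive accesses touch adjacent
 * addresses, so every byte of each fetched cache line is used. */
static double sum_row_major(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += grid[i][j];
    return s;
}

/* Column-by-column traversal: consecutive accesses are N * 8 bytes
 * apart, so each access lands on a different cache line. */
static double sum_col_major(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += grid[i][j];
    return s;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            grid[i][j] = 1.0;

    clock_t t0 = clock();
    double a = sum_row_major();
    clock_t t1 = clock();
    double b = sum_col_major();
    clock_t t2 = clock();

    printf("row-major: %.3f s (sum %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, a);
    printf("col-major: %.3f s (sum %.0f)\n",
           (double)(t2 - t1) / CLOCKS_PER_SEC, b);
    return 0;
}

Both loops do identical arithmetic; the gap in running time comes entirely from how well each access pattern matches the cache hierarchy described above.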
A paper prepared for the IEEE International Symposium on Computer Architecture sheds light on several of these new algorithms for how cores work with the cache. First, when a core's first-level cache fills up, the researchers propose evicting half of its data into the shared cache, which in any case offers faster access than RAM. The second technique targets two cores working on a common block of data that requires constant synchronization between them: if the cores take turns accessing the cached block, the computational cost of that synchronization is eliminated. Finally, the regions of the shared cache that hold data requested by a particular core are treated as a logical extension of that core's first-level cache. Among other things, the schemes reduce the number of instructions and operations executed, which also has a positive effect on processor speed and power consumption.
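None of these hardware policies is exposed to programmers directly, but the effect of the first one can be mimicked in a toy model. The C sketch below simulates a tiny private cache backed by a shared cache; on overflow, the evicted line is spilled into the shared cache instead of being dropped back to memory. The one-line-at-a-time eviction, the sizes, and all the names here are illustrative assumptions, not the paper's actual mechanism.

#include <stdio.h>
#include <string.h>

#define L1_CAP  4      /* tiny private cache, fully associative, LRU */
#define L2_CAP  16     /* shared cache, also fully associative, LRU  */

typedef struct {
    long tags[L2_CAP]; /* LRU order: front = oldest, back = newest   */
    int  used, cap;
} Cache;

/* Return 1 on a hit and move the line to most-recently-used. */
static int lookup(Cache *c, long tag) {
    for (int i = 0; i < c->used; i++) {
        if (c->tags[i] == tag) {
            memmove(&c->tags[i], &c->tags[i + 1],
                    (size_t)(c->used - i - 1) * sizeof(long));
            c->tags[c->used - 1] = tag;
            return 1;
        }
    }
    return 0;
}

/* Insert a line as most-recently-used; return the evicted tag or -1. */
static long insert(Cache *c, long tag) {
    long victim = -1;
    if (c->used == c->cap) {
        victim = c->tags[0];            /* evict the LRU line         */
        memmove(&c->tags[0], &c->tags[1],
                (size_t)(c->used - 1) * sizeof(long));
        c->used--;
    }
    c->tags[c->used++] = tag;
    return victim;
}

static void access_line(Cache *l1, Cache *l2, long tag,
                        int *l1_hits, int *l2_hits, int *fetches) {
    if (lookup(l1, tag)) { (*l1_hits)++; return; }
    if (lookup(l2, tag)) (*l2_hits)++;  /* found in the shared cache  */
    else                 (*fetches)++;  /* had to go to main memory   */
    long victim = insert(l1, tag);
    if (victim >= 0 && !lookup(l2, victim))
        insert(l2, victim);             /* spill instead of dropping  */
}

int main(void) {
    Cache l1 = { .used = 0, .cap = L1_CAP };
    Cache l2 = { .used = 0, .cap = L2_CAP };
    int l1_hits = 0, l2_hits = 0, fetches = 0;

    /* Loop over a working set larger than the private cache but
     * smaller than the shared one: spilled lines are found again. */
    for (int pass = 0; pass < 4; pass++)
        for (long tag = 0; tag < 10; tag++)
            access_line(&l1, &l2, tag, &l1_hits, &l2_hits, &fetches);

    printf("private hits %d, shared hits %d, memory fetches %d\n",
           l1_hits, l2_hits, fetches);
    return 0;
}

Because the working set of ten lines exceeds the private cache but fits in the shared one, later passes find the spilled lines in the shared cache rather than refetching them from memory, which is the benefit the proposal is after.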