The memory-processor speed gap has grown so large that accessing main memory in modern systems takes hundreds of processor cycles. Traditionally, a cache hierarchy is inserted between the processor and main memory to narrow this gap. However, since a cache has no knowledge of future references, data is stored at every cache level, even when it exhibits no locality. Recently, EPIC architectures introduced cache hints, which allow the compiler to specify the cache level at which data is stored. In this way, the allocation and replacement strategy can be adapted to the locality of each memory instruction. To exploit these cache hints, a compiler algorithm is proposed that calculates the locality of memory accesses. When an access exhibits little locality for a given cache level, its data is not stored at that level, which reduces cache pollution. The goal is to store the data at the lowest cache level where it will remain at least until the next access.
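The level-selection rule in the last sentence can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes fully associative LRU caches, measures locality as the forward reuse distance (distinct addresses touched before the next access to the same address), and uses hypothetical helper names; `None` stands for "bypass all cache levels".

```python
def reuse_distances(trace):
    """Forward reuse distance of each access: the number of distinct
    addresses touched before the next access to the same address
    (infinity if the address is never accessed again)."""
    dists = []
    for i, addr in enumerate(trace):
        seen = set()
        dist = float('inf')
        for later in trace[i + 1:]:
            if later == addr:
                dist = len(seen)
                break
            seen.add(later)
        dists.append(dist)
    return dists

def pick_cache_level(distance, capacities):
    """Lowest (closest-to-processor) level whose capacity covers the
    reuse distance under fully associative LRU, so the data survives
    until its next access; None means store at no cache level."""
    for level, capacity in enumerate(capacities, start=1):
        if distance < capacity:
            return level
    return None

# Hypothetical trace and capacities: L1 holds 2 lines, L2 holds 4.
trace = ['a', 'b', 'a', 'c', 'b', 'd']
levels = [pick_cache_level(d, [2, 4]) for d in reuse_distances(trace)]
```

In this toy trace, the first access to `a` is reused after one distinct address, so L1 suffices; the first `b` needs L2; the remaining accesses are never reused, so their lines would bypass the hierarchy.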