Publications

D
Dias, R. J., T. M. Vale, and J. M. Lourenço, "In-Place Metainfo Support in DeuceSTM", Talk presented at the 2nd Euro-TM Workshop on Transactional Memory, Bern, Switzerland, 2012.

Several Software Transactional Memory (STM) algorithms have been developed in the last few years. These algorithms differ in many properties, such as progress and safety guarantees, contention management, etc. Some STM frameworks (DSTM2, DeuceSTM) were developed to allow the implementation and comparison of many different algorithms using the same transactional interface. The information required by an STM algorithm is either associated with the execution of a transaction (frequently referred to as the transaction descriptor), or associated with each memory location (or object reference) accessed within the transaction. The transaction descriptor is typically stored in a thread-local memory space and maintains the information necessary to validate and commit a memory transaction. The information associated with each memory location, which we will henceforth refer to as metadata, depends on the nature of the STM algorithm and may be composed of, e.g., locks, timestamps or version lists. Since metadata is associated with each memory location, there are two main strategies for storing it: the metadata is stored either near each memory location (in-place strategy), or in a mapping table that associates the metadata with the corresponding memory location (out-place strategy). DeuceSTM is one of the most efficient STM frameworks available for the Java programming language and provides a well-defined interface that allows several STM algorithms to be implemented using an out-place strategy. This work describes the adaptation and extension of DeuceSTM to support the in-place metadata strategy.
Our approach complies with the following properties. Efficiency: the STM implementation does not rely on an auxiliary mapping table, thus providing faster, direct access to the transactional metadata; non-transactional code is oblivious to the presence of metadata in objects, hence there is no performance overhead whatsoever for non-transactional code; transactional code avoids the extra memory dereference imposed by the decorator pattern; primitive types are fully supported, even in transactional code; and we provide a solution for fully supporting transactional N-dimensional arrays, which again imposes no overhead on non-transactional code. Transparency: the STM implementation automatically identifies, creates and initializes the necessary additional metadata fields in the objects; non-transactional code is oblivious to the presence of metadata in objects, hence no code changes are required; the new transactional array types (supporting metadata for individual cells) are compatible with the standard arrays, hence requiring no pre-/post-processing of the arrays when invoking standard or third-party non-transactional libraries. Performance benchmarking has confirmed that our approach for supporting in-place metadata in DeuceSTM, alongside the existing out-place strategy, is a valid functional complement, widening the range of algorithms that can be efficiently implemented in DeuceSTM and enabling their fair comparison on a common infrastructure.
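The difference between the two metadata strategies can be sketched in a few lines of Java. This is an illustrative sketch only: the names below (TxMeta, OutPlaceStm, InPlaceCounter) are ours and are not part of the DeuceSTM API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MetadataStrategies {
    // Per-location metadata an STM algorithm might keep
    // (e.g., a lock word or a version timestamp).
    static final class TxMeta {
        volatile long version;
    }

    // Out-place: metadata lives in an external table, found through the
    // location's identity; every transactional access pays a map lookup.
    static final class OutPlaceStm {
        private final Map<Object, TxMeta> table = new ConcurrentHashMap<>();
        TxMeta metaFor(Object location) {
            return table.computeIfAbsent(location, k -> new TxMeta());
        }
    }

    // In-place: metadata is a field stored next to the data it guards,
    // so transactional code reaches it with a direct field access.
    static final class InPlaceCounter {
        int value;                        // the application data
        final TxMeta meta = new TxMeta(); // metadata stored alongside it
    }

    public static void main(String[] args) {
        OutPlaceStm stm = new OutPlaceStm();
        InPlaceCounter c = new InPlaceCounter();
        stm.metaFor(c).version = 1; // out-place: table lookup, then access
        c.meta.version = 1;         // in-place: single field dereference
        System.out.println(stm.metaFor(c).version + " " + c.meta.version);
    }
}
```

The out-place access pays a table lookup on every transactional access, while the in-place variant reaches the metadata with one field dereference; the extension described in this work injects such fields automatically via bytecode transformation, keeping them invisible to non-transactional code.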

Dias, R. J., D. Distefano, J. M. Lourenço, and J. C. Seco, StarTM: Automatic Verification of Snapshot Isolation in Transactional Memory Java Programs, no. UNL-DI-6-2011, Lisboa, Departamento de Informática FCT/UNL, 2011.

This paper presents StarTM, an automatic verification tool for transactional memory Java programs executing under relaxed isolation levels. We certify which transactions in a program are safe to execute under Snapshot Isolation without triggering the write-skew anomaly, opening the way to run-time optimizations that may lead to considerable performance enhancements.
Our tool builds on a novel shape analysis technique based on Separation Logic to statically approximate the read- and write-sets of a transactional memory Java program. This technique is particularly challenging due to the presence of dynamically allocated memory.
We implement our technique and apply our tool to a set of intricate examples. We corroborate known results, certifying some of the examples for safe execution under Snapshot Isolation by proving the absence of write-skew anomalies. In other cases we identify transactions that potentially trigger the write-skew anomaly.

Dias, R. J., D. Distefano, J. C. Seco, and J. M. Lourenço, "Verification of Snapshot Isolation in Transactional Memory Java Programs", ECOOP 2012 – Object-Oriented Programming, vol. 7313, Beijing, China, Springer Berlin Heidelberg, pp. 640–664, 2012.

This paper presents an automatic verification technique for transactional memory Java programs executing under snapshot isolation level. We certify which transactions in a program are safe to execute under snapshot isolation without triggering the write-skew anomaly, opening the way to run-time optimizations that may lead to considerable performance enhancements. Our work builds on a novel deep-heap analysis technique based on separation logic to statically approximate the read- and write-sets of a transactional memory Java program. We implement our technique and apply our tool to a set of micro-benchmarks and also to one benchmark of the STAMP package. We corroborate known results, certifying some of the examples for safe execution under snapshot isolation by proving the absence of write-skew anomalies. In other cases our analysis has identified transactions that potentially trigger previously unknown write-skew anomalies.
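The write-skew anomaly this line of work certifies against can be reproduced with a tiny deterministic simulation. The variables and the invariant x + y >= 1 below are our own illustrative choices, not taken from the paper's benchmarks:

```java
// A minimal write-skew simulation: two transactions read the same
// snapshot, each preserves the invariant in isolation, yet their
// combined commit breaks it.
public class WriteSkew {
    static int x = 1, y = 1; // invariant the program relies on: x + y >= 1

    public static void main(String[] args) {
        // Both transactions start from the same consistent snapshot.
        int snapX = x, snapY = y;

        // T1: reads both variables, writes only x; its guard holds
        // on its snapshot, so the write looks safe locally.
        int t1x = (snapX + snapY >= 2) ? 0 : snapX;

        // T2: reads both variables, writes only y, on the same snapshot.
        int t2y = (snapX + snapY >= 2) ? 0 : snapY;

        // Snapshot isolation validates only write-write conflicts; the
        // write sets {x} and {y} are disjoint, so both commits succeed.
        x = t1x;
        y = t2y;

        System.out.println("x + y = " + (x + y)); // prints "x + y = 0"
    }
}
```

A serializable run would have aborted one of the two transactions; under snapshot isolation both commit and the invariant is silently violated, which is exactly the situation the static analysis tries to rule out by approximating read- and write-sets.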

Dias, R. J., T. M. Vale, and J. M. Lourenço, "Framework Support for the Efficient Implementation of Multi-Version Algorithms", Transactional Memory: Foundations, Algorithms, Tools, and Applications: Springer, 2014.

Software Transactional Memory algorithms associate metadata with the memory locations accessed during a transaction’s lifetime. This metadata may be stored in an external table and accessed by way of a function that maps the address of each memory location to the table entry that keeps its metadata (this is the out-place or external scheme); or alternatively it may be stored adjacent to the associated memory cell by wrapping them together (the in-place scheme). In transactional memory multi-version algorithms, several versions of the same memory location may exist. The efficient implementation of these algorithms requires a one-to-one correspondence between each memory location and its list of past versions, which is stored as metadata. In this chapter we address the efficient implementation of multi-version algorithms in Java by proposing and evaluating a novel in-place metadata scheme for the Deuce framework. This new scheme is based on Java bytecode transformation techniques and its use requires no changes to the application code. Experimentation indicates that multi-versioning STM algorithms implemented using our new in-place scheme are on average 6× faster than when implemented with the out-place scheme.

Dias, R. J., J. M. Lourenço, and N. Preguiça, "Efficient and Correct Transactional Memory Programs Combining Snapshot Isolation and Static Analysis", Proceedings of the 3rd USENIX Conference on Hot Topics in Parallelism (HotPar'11), Berkeley, USA, Usenix Association, May, 2011.

Concurrent programs may suffer from concurrency anomalies that may lead to erroneous and unpredictable program behaviors. To ensure program correctness, these anomalies must be diagnosed and corrected. This paper addresses the detection of both low- and high-level anomalies in the Transactional Memory setting. We propose a static analysis procedure and a framework to address Transactional Memory anomalies. We start by dealing with the classic case of low-level dataraces, identifying concurrent accesses to shared memory cells that are not protected within the scope of a memory transaction. Then, we address the case of high-level dataraces, bringing the programmer's attention to pairs of memory transactions that were misspecified and should have been combined into a single transaction. Our framework was applied to a set of programs, collected from different sources, containing well-known low- and high-level anomalies. The framework proved to be accurate, confirming the effectiveness of using static analysis techniques to precisely identify concurrency anomalies in Transactional Memory programs.

Dias, R. J., V. Pessanha, and J. M. Lourenço, "Precise Detection of Atomicity Violations", Hardware and Software: Verification and Testing, vol. 7857: Springer Berlin / Heidelberg, pp. 8–23, 2013.

Dias, R. J., T. M. Vale, and J. M. Lourenço, "Efficient Support for In-Place Metadata in Transactional Memory", Euro-Par 2012 Parallel Processing, vol. 7484, Rhodes, Greece, Springer Berlin Heidelberg, pp. 589–600, 2012.

Implementations of Software Transactional Memory (STM) algorithms associate metadata with the memory locations accessed during a transaction’s lifetime. This metadata may be stored either in-place, by wrapping every memory cell in a container that includes the memory cell itself and the corresponding metadata; or out-place (also called external), by resorting to a mapping function that associates the memory cell address with an external table entry containing the corresponding metadata. The implementation techniques for these two approaches are very different, and each STM framework is usually biased towards one of them, only allowing the efficient implementation of STM algorithms following that approach and hence inhibiting a fair comparison with STM algorithms falling into the other. In this paper we introduce a technique to implement in-place metadata that does not wrap memory cells, thus overcoming the bias by allowing STM algorithms to directly access the transactional metadata. The proposed technique is available as an extension to the DeuceSTM framework and enables the efficient implementation of a wide range of STM algorithms and their fair (unbiased) comparison in a common STM infrastructure. We illustrate the benefits of our approach by analyzing its impact on two popular TM algorithms, TL2 and multi-versioning (biased towards out-place and in-place, respectively), under two different transactional workloads.

Dias, R. J., T. M. Vale, and J. M. Lourenço, "Write-Write Conflict Detection for Distributed TM Systems", Talk presented at the 1st Euro-TM Workshop on Distributed Transactional Memory (WDTM'12), Lisbon, Portugal, 2012.

Typical Distributed Transactional Memory (DTM) systems provide serializability by detecting read-write conflicts between accesses to shared data objects. Detecting read-write conflicts requires the bookkeeping of all the read and write accesses to the shared objects made inside the transaction, and the validation of all these accesses at the end of a transaction. For most DTM algorithms, the validation of a transaction requires the broadcast of its read-set to all the nodes of the distributed system during the commit phase. Since applications tend to read more than write, the overhead associated with the bookkeeping of read accesses, and the amount of network traffic generated during the commit, are a major performance problem in DTM systems. Database systems frequently rely on weaker isolation models to improve performance. In particular, Snapshot Isolation (SI) is widely used in industry. An interesting aspect of SI is that only write-write conflicts are checked at commit time and considered for detecting conflicting transactions. As a main result, a DTM using this isolation model does not need to keep track of read accesses, considerably reducing the bookkeeping overhead, and also eliminates the need to broadcast the read-set at transaction commit time, thus reducing the network traffic and considerably increasing the scalability of the whole system. By only detecting write-write conflicts, this isolation model allows a much higher commit rate, which comes at the expense of allowing some truly conflicting transactions to commit. Thus, relaxing the isolation of a transactional program may lead previously correct programs to misbehave due to the anomalies resulting from malign data-races that are now allowed by the relaxed transactional run-time. These anomalies can be precisely characterized, and are often referred to in the literature as write-skew anomalies.
Write-skew anomalies can be avoided dynamically with the introduction of algorithms that verify the serializability of running transactions, which may result in a non-negligible overhead. Our approach is to avoid the run-time overhead by statically asserting that a DTM program will execute without generating write-skew anomalies at runtime, while only verifying write-write conflicts. Conflicting transactions can be made “safe” by code rewriting using well-known techniques. Our verification technique uses separation logic, a logic for verifying programs that perform intensive heap manipulation, which was extended to construct abstract read- and write-sets for each transaction in the DTM program. This verification technique is implemented in the StarTM tool, which analyzes transactional Java bytecode programs. The transactions that cannot be verified by our tool are executed under serializable isolation (detecting read-write conflicts), while for the “safe” transactions we only detect write-write conflicts. The results of our experiments when verifying micro-benchmarks, such as linked lists and binary trees, and partially verifying some macro-benchmarks, such as the STAMP benchmark, strongly encourage the ongoing work on applying this technique to DTM.

Dias, R. J., J. C. Seco, and J. M. Lourenço, Detection of Snapshot Isolation Anomalies in Software Transactional Memory: A Static Analysis Approach, Universidade Nova de Lisboa, UNL-DI-5-2011.
Dias, R. J., T. M. Vale, and J. M. Lourenço, "Efficient Support for In-Place Metadata in Java Software Transactional Memory", Concurrency and Computation: Practice & Experience, vol. 25, issue 17: Wiley, pp. 2394–2411, 2013.

F
Farchi, E., I. Segall, J. M. Lourenço, and D. G. Sousa, "Using Program Closures to Make an Application Programming Interface (API) Implementation Thread Safe", Proceedings of the 2012 Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging, Minneapolis, MN, USA, ACM, pp. 18–24, 2012.

Consider a set of methods implementing an Application Programming Interface (API) of a given library or program module that is to be used in a multithreaded setting. If those methods were not originally designed to be thread safe, races and deadlocks are expected to happen. This work introduces the novel concept of program closure and describes how it can be applied in a methodology to make the library or module implementation thread safe, by identifying the high-level data races introduced by interleaving the parallel execution of methods from the API. High-level data races result from the misspecification of the scope of an atomic block, by wrongly splitting it into two or more atomic blocks sharing a data dependency. Roughly speaking, the closure of a program P, clos(P), is obtained by incrementally adding new threads to P in such a way that enables the identification of the potential high-level data races that may result from running P in parallel with other programs. Our model considers the methods implementing the API of a library or program module as concurrent programs, and computes and analyses their closure in order to identify high-level data races. These high-level data races are inspected and removed to make the interface thread safe. We illustrate the application of this methodology with a simple use case.
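The kind of high-level data race described above can be illustrated with a check-then-act wrongly split across two atomic blocks. The account example below is our own minimal sketch, not taken from the paper:

```java
// Hypothetical illustration: a data dependency (check, then act)
// wrongly split into two separately-atomic methods.
public class SplitAtomicity {
    private int balance = 100;

    // Each method is atomic on its own...
    public synchronized boolean hasFunds(int amount) { return balance >= amount; }
    public synchronized void withdraw(int amount) { balance -= amount; }
    public synchronized int balance() { return balance; }

    // ...but a caller writing `if (hasFunds(a)) withdraw(a);` creates a
    // high-level data race: another thread may withdraw between the two
    // atomic blocks, because the data dependency spans them.
    public static void main(String[] args) {
        SplitAtomicity acc = new SplitAtomicity();
        // Simulate the bad interleaving deterministically:
        boolean ok = acc.hasFunds(100);    // thread A's check succeeds
        acc.withdraw(100);                 // thread B withdraws in between
        if (ok) acc.withdraw(100);         // thread A acts on a stale check
        System.out.println(acc.balance()); // prints -100: invariant broken
    }
}
```

Each synchronized method is individually atomic, yet the check and the act share a data dependency that spans two blocks; computing the closure of such a program adds the interfering thread and makes this high-level race visible to the analysis.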

L
Lourenço, J. M., D. Sousa, B. C. Teixeira, and R. J. Dias, "Detecting concurrency anomalies in transactional memory programs", Comput. Sci. Inf. Syst., vol. 8, no. 2, pp. 533–548, 2011.

Software transactional memory is a promising programming model that adapts many concepts borrowed from the databases world to control concurrent accesses to main memory (RAM). This paper discusses how to support revertible operations, such as memory allocation and release, within software libraries that will be used in software transactional memory contexts. The proposal is based on the extension of the transaction life-cycle state diagram with new states associated with the execution of user-defined handlers. The proposed approach is evaluated in terms of functionality and performance by way of a use case study and performance tests. Results demonstrate that the proposal and its current implementation are flexible, generic and efficient.

Luís, J. E., "TxBtrfs — A Transactional Snapshot-based File System", FCT - Universidade Nova de Lisboa: Universidade Nova de Lisboa, 2011.

Several decades ago, the file system was the container of choice for large bulks of related information, kept in hundreds of files, and relying on applications specifically created to handle them. These configurations weren't scalable and could easily become difficult to maintain, leading to the development and adoption of Database Management Systems (DBMS). These systems, capable of efficiently handling vast amounts of data, allowed heavy concurrency without requiring the programmer to deal with concurrency-control mechanisms, by encapsulating operations within transactions.
The properties of transactions rapidly became an object of desire by many, and efforts to bring them to general-purpose programming environments began. In recent years there have been breakthroughs in bringing transactional semantics to memory, using Software Transactional Memory (STM), providing abstractions for concurrency control at the application level. However, STM failed to meet some expectations, especially regarding I/O operations, forcing the abstraction to go deeper in the system: directly to the file system.
In this document we shall discuss file systems in general, their properties and common structure, focusing on those with transactional or versioning capabilities. Later on, we will present our proposed enhancement of an existing Linux file system (Btrfs), in order to offer transactional semantics to applications, while detecting potential conflicts between concurrent flows of execution and reconciling their changes whenever possible.

Luís, J. E., J. M. Lourenço, and P. A. Lopes, "Suporte Transaccional para o Sistema de Ficheiros Btrfs", InForum 2011: Proceedings of InForum Simpósio de Informática, Coimbra, Universidade de Coimbra, 2011.

In the event of an abrupt system failure, it is imperative to guarantee the consistency of the File System (FS). Several solutions currently exist that aim to guarantee that both the data and the metadata of the FS are in a consistent state, but they do not address guaranteeing the consistency of the data from the applications' point of view. For example, applications that need to change several configuration files must find mechanisms to guarantee that either all files are properly changed or none is, thus preventing the contents of the files from becoming inconsistent in the event of a failure. From the application's point of view, it may not be simple to implement this behaviour on top of a typical FS; and it may also not be reasonable to resort to a Database Management System (DBMS), which offers ACID properties. In this paper we propose, test and evaluate an integration of the ACID properties into a FS. Building on the snapshot support of the Btrfs file system, transactional semantics are offered to applications that operate on volumes (sub-trees) of the FS, without compromising the POSIX semantics of the FS.

M
Martins, J., "Lightweight monitoring of transactional memory programs", Universidade Nova de Lisboa, 2013.
Martins, H. R. L., J. Soares, J. M. Lourenço, and N. Preguiça, "Replicação Multi-nível de Bases de Dados em Memória", Proceedings of INForum Simpósio de Informática, Lisbon, Portugal, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, pp. 190–201, Sep, 2013.

Martins, H., "Distributed replicated macro-components", Universidade Nova de Lisboa, 2013.
P
Pessanha, V., "Practical Verification of Anomalies in Transactional Memory Programs", FCT - Universidade Nova de Lisboa: Universidade Nova de Lisboa, 2011.

Transactional Memory (TM) is an approach to concurrency control in general purpose programming languages that inherits the concept of transaction from the database setting. Unlike other language constructs such as locks, TM has an optimistic approach to concurrency control by allowing more than one thread to simultaneously access the same critical section. A transaction always executes as if it is alone in the system, and in the end its effects are undone (rolled back) if it conflicts with other concurrent transactions. In spite of the potential for increasing scalability and performance, TM is a recent and developing programming model and still has a very limited impact in real-world applications.
Designing and developing concurrent software is difficult and error prone. Concurrent programs exhibit concurrency anomalies that originate faults and failures. Despite some claims that TM programs are less error prone, they still exhibit concurrency anomalies such as high-level dataraces, i.e., wrong delimitations of transactions’ scope, and stale-value errors, which occur when the value of a shared variable jumps from one atomic block to another.
Programs with this kind of anomalies can exhibit unpredictable and wrong behaviour, not fulfilling the goals for which they were conceived.
This work aims at the detection of anomalies through static analysis of transactional Java bytecode programs that execute under strong atomicity. An extensible and flexible framework is proposed, which can be extended with plugins that detect specific types of anomalies. With this framework we expect to prove that high-level dataraces and stale-value errors can be detected with reasonable precision through static analysis.

Keywords: Atomicity Violation, High-Level Datarace, Static Analysis, Concurrency, Software Transactional Memory

Pessanha, V., R. J. Dias, J. M. Lourenço, E. Farchi, and D. Sousa, "Practical verification of high-level dataraces in transactional memory programs", Proceedings of the 9th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging, New York, NY, USA, ACM, pp. 26–34, July, 2011.

In this paper we present MoTh, a tool that uses static analysis to enable the automatic verification of concurrency anomalies in Transactional Memory Java programs. Currently MoTh detects high-level dataraces and stale-value errors, but it is extensible by plugging in sensors, each sensor implementing an anomaly detection algorithm. We validate and benchmark MoTh by applying it to a set of well-known concurrent buggy programs and by closely comparing the results with those of other similar tools. The results achieved so far are very promising, yielding good accuracy while triggering only a very limited number of false warnings.

Pina, L., L. Veiga, and M. Hicks, "Rubah: DSU for Java on a stock JVM", ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2014): ACM, Sep, 2014.

R
Rameshan, N., and L. Veiga, "RATS - Resource Aware Thread Scheduling for JVM-level Clustering", INFORUM 2012 - Simpósio de Informática, Sep, 2012.

Rameshan, N., Efficient Thread Scheduling for Distributed Java VM, Instituto Superior Técnico, 2012.

In this work, we propose RATS, a middleware that enhances and extends the Terracotta framework for Java with the ability to transparently execute multi-threaded Java applications, providing a single-system image. It supports efficient scheduling of threads, according to available resources, across several nodes in a Terracotta cluster, taking advantage of the extra computational and memory resources available. It also supports profiling to gather application characteristics such as dispersion of thread workload, thread inter-arrival time and resource usage of the application. Profiling and clustering capabilities are inserted with the help of bytecode instrumentation. We developed a range of alternative scheduling heuristics and classify them based on application and cluster behavior. The middleware is tested with different applications with varying thread characteristics to assess and classify the scheduling heuristics with respect to application speed-ups. Results indicate that, for a CPU-intensive application, it is possible to classify the scheduling heuristic based on application and cluster properties and also achieve linear speed-ups. Furthermore, we show that a memory-intensive application is able to scale its memory usage considerably when compared to running the application on a single JVM.