Software problems not only cause significant financial losses but can also cost human lives. These problems stem from bugs, failures, errors, and defects in software systems. Such anomalies, and software defects in particular, have a major impact not only on business activities but also on the cost of developing and maintaining these systems. In order to identify their sources, particularly those with a severe impact on system operations, we conducted two case studies. We analyzed the software defects of two systems over a period of a year and a half, classifying them according to their trigger factors and their severity. These studies led us to propose the "origins of severe software defects" method, in order to identify the trigger factors that cause severe defects in a given evolving system. We also found that, for this type of system, the group of technology trigger factors causes more severe defects than the other groups of trigger factors.
This thesis studies the value of responsiveness for a manufacturer. In practical terms, responsiveness allows a manufacturer to wait until actual customer demand can be observed. The benefits come from minimized waste, as fewer unwanted goods are produced, and increased sales, as more of the actual demand can be met. If a manufacturer uses responsiveness to provide services, for example the customization of products for its customers, responsiveness can be a strategic competitive advantage. This work contributes to the understanding of when the higher cost of responsiveness can be justified and a manufacturer should invest in local capacity instead of low-cost offshore manufacturing.
The first essay investigates an approach called Volatility Portfolio and Option-based Costing. This approach suggests building a balanced portfolio of products with high and low time-sensitivity to maximize the benefits of responsiveness, contrary to the common intuitive solution. Through four applications to cases from different industries, we show how the approach delivers value and how to move from theory to practice.
It has been shown that innovation follows manufacturing. Using company cases, I build a hypothesis of how responsiveness leads to higher innovation through local problem-solving and customer-driven innovations. The contribution of this essay is that companies should consider learning and innovation when making their production decisions.
Currently used models can systematically underestimate the value of lead-time reduction. The underestimation can be significant for products with a short sales period and a clearance price that varies with the share of unsold inventory. The third essay demonstrates the reasons for this underestimation and provides tools to fix it.
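A toy numerical illustration of the mechanism described above (the prices, costs, and demand scenarios are invented for this sketch and are not the essay's actual model): when the clearance price falls with the share of unsold inventory, the value of lead-time reduction comes out larger than a fixed-salvage model would suggest.

```python
# Hypothetical parameters for illustration only.
PRICE, COST, SALVAGE, Q = 10.0, 4.0, 2.0, 100.0  # sell price, unit cost, base salvage, quantity

def expected_profit(demand_scenarios, salvage_varies):
    """Average profit over equally likely demand scenarios."""
    total = 0.0
    for d in demand_scenarios:
        sold = min(d, Q)
        leftover = Q - sold
        # Clearance price drops linearly with the unsold share, if enabled.
        unit_salvage = SALVAGE * (1 - leftover / Q) if salvage_varies else SALVAGE
        total += PRICE * sold + unit_salvage * leftover - COST * Q
    return total / len(demand_scenarios)

# Long lead time -> forecast made early -> wide demand spread.
long_lead = [50.0, 150.0]
# Short lead time -> produce closer to the season -> narrow spread.
short_lead = [90.0, 110.0]

for varies in (False, True):
    value = expected_profit(short_lead, varies) - expected_profit(long_lead, varies)
    label = "variable clearance price" if varies else "fixed salvage price"
    print(f"value of lead-time reduction, {label}: {value:.0f}")
```

With these numbers the lead-time reduction is worth 160 under a fixed salvage price but 184 when the clearance price degrades with leftover share, illustrating the direction of the underestimation.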
This thesis presents a behavioral economics contribution to the security of information systems. It focuses on security information sharing (SIS) between operators of critical infrastructures, such as systemic banks, power grids, or telecommunications. SIS is an activity by which these operators exchange cybersecurity-relevant information, for instance on vulnerabilities, malware, or data breaches. Such information sharing is a low-cost and efficient way for the defenders of such infrastructures to enhance cybersecurity. However, despite this advantage, economic (dis)incentives, such as the free-rider problem, often reduce the extent to which SIS is actually used in practice. This thesis responds to this problem with three published articles.
The first article sets out a theoretical framework that proposes an association between human behavior and SIS outcomes. The second article further develops and empirically tests this proposed association, using data from a self-developed psychometric survey among all participants of the Swiss Reporting and Analysis Centre for Information Assurance (MELANI). SIS is measured by a dual approach (intensity and frequency), and hypotheses on five salient factors that are likely associated with SIS outcomes (attitude, reciprocity, executional cost, reputation, trust) are tested. In the third article, policy recommendations are presented to reduce executional cost, which is found to be significantly and negatively associated with SIS. In conclusion, this thesis offers multiple scientific and practical contributions. It extends the scientific literature on the economics of cybersecurity with three contributions on the human factor in SIS. In addition, regulators will find many recommendations, particularly in the area of governance, to support SIS at the legislative level. The thesis also offers many avenues for practitioners to improve the efficiency of SIS, particularly within the Information Sharing and Analysis Centers (ISACs) in charge of producing Cyber Threat Intelligence to anticipate and prevent cyber risks.
It is commonly acknowledged that business model innovation carries enormous opportunities for incumbent organizations, especially when driven by digital transformation. However, less is known about the challenges faced by offline-born organizations (i.e., those established before the diffusion of the Internet) that attempt this journey of change. In this context, thanks to my research setting, based on the collaboration between the BISA team at UNIL and SAP AG, I contribute to the business model domain with two research streams.
First, I address the process of business model management, analyzing the phases that go beyond business model design. I observe this process in practice, complementing the predominantly conceptual literature. I contribute to the research by identifying two approaches to business model management: a deterministic, waterfall approach, characterized by a high level of certainty and confidence on the part of the management team; and a discovery-driven approach, in which numerous design and evaluation iterations are performed before business model implementation.
Second, I study the design of business models for connected products. Phenomena such as the Internet of Things and smart cities require a complex network of actors in which organizations, individuals, and objects exchange value. Existing business model representations, with their rather generic elements and components, cannot fully describe such networks. Therefore, I take a first step towards new means of representation, proposing a taxonomy of design elements to represent business models for cyber-physical systems, i.e., the combination of physical and computational processes at the foundation of connected products. The main contribution of this research is a specific set of actors' roles, the types of value they exchange and perceive, and their dominance in the network.
Cross-boundary teams (those whose members span functions and organizations) have been considered over the past two decades as the most effective strategy for organizations to undertake complex and innovative projects. While these teams are well-equipped for such projects because they can tap into a diverse set of knowledge and skills, these differences also increase the costs and difficulties of collaborating. In this dissertation, I relate the design science research project I undertook to address three challenges that such teams frequently encounter: coordinating, cooperating, and solving wicked problems.
The particularity of this dissertation is that I integrate work in information systems, psycholinguistics, and sociology to develop both descriptive and prescriptive knowledge on cross-boundary challenges. The prescriptive knowledge consists of (1) two artifacts (the Coopilot App and the Team Alignment Map) for the challenges of coordination and cooperation, and (2) a design theory that helps designers develop visual inquiry tools. The descriptive knowledge consists of (1) a process model for team coordination through conversation and (2) a conceptual model that explains how team members can overcome the three challenges by entering a process of joint inquiry.
Overall, I argue that for future research to provide prescriptive guidance for the variety of challenges cross-boundary teams can encounter, cross-boundary teamwork should be conceived of as a process of joint inquiry. Through this dissertation I also illustrate how design science researchers can help address the lack of prescriptive and actionable knowledge on cross-boundary teamwork.
Every day we make dozens of guesses. How likely is Macron to win the second round of the 2017 French presidential elections? Is Brazil or Germany going to win the 2018 football World Cup? Is Roger Federer or Grigor Dimitrov going to win the final of the Rotterdam Open? This thesis investigates and models how people make such guesses. The first chapter addresses the first of those questions: how likely is something to happen? It demonstrates how an existing theory of how we judge likelihood can be extended to make more refined predictions about our behavior. The rest of the thesis focuses on the other two questions, which ask which of two items is better on some dimension. The second chapter outlines how we can predict how long people will take to make such guesses and which brain regions will be active while they are making them. The last chapter demonstrates that there are systematic differences between how people guess when faced with real-world questions and with typical, artificial, experimental questions. One major difference is that in the real world some objects are more familiar than others, and people tend to guess that the more familiar object is better. The results of this thesis help us to better understand how we make decisions and can be used by, for example, software application designers or marketing specialists to distribute information in a way that reduces the effort we exert when faced with a decision.
Today, to find meeting points or information on public transportation, we frequently use our mobile devices and, more specifically, the location-based services installed on them. These applications are extremely convenient and help us on a daily basis. However, we sacrifice our privacy, sometimes without realizing it, by sharing our locations with location-based services, thereby giving private information about ourselves to the companies that own these services. Indeed, information can easily be extracted from our location history: our frequently visited places, our hobbies, even our identity. This becomes more critical when these services, in order to create new content, build mobility-prediction models of our lives. These companies collect a large amount of personal data that can be used for commercial or malicious purposes. Consequently, it is crucial to create new algorithms and architectures that preserve our location privacy when we use location-based services. This thesis focuses on the issues exposed above. This is particularly relevant today, as the European General Data Protection Regulation (EU GDPR), which aims at preserving the privacy of individuals, came into effect on 25 May 2018, implying that European and some Swiss companies must comply with it.
In recent years, cyber operations and malicious cyber activities have become common means of achieving strategic national interests. Their increasingly disruptive effects, which have destabilized international peace and security and fueled geopolitical instability, have catapulted cyber risks to the national and international security agenda. This dissertation explores Swiss foreign policy as an instrument conducive to international cyber stability. However, while cyberspace has developed into a distinct realm of interstate relations, I argue that states' behavior in that domain is embedded in the existing international order, which is conducive to international peace and stability. My overall conclusion is that, for the time being, no new rules, instruments, or policy responses are needed to delineate acceptable interstate behavior. This standpoint reflects Swiss foreign policy between 2012 and 2017 to the extent that the chosen policy instruments and diplomatic processes in the cyber realm do not differ significantly from those in other Swiss policy areas. I illustrate this via three qualitative case studies, highlighting not only the multilateral venues of Switzerland's engagement in the cyber realm (i.e., the UN and the OSCE) but also various norms and measures to build both confidence and cyber security capacity.
Random graph theory has been an important tool for modeling and solving problems related to real-world networks. Although these problems come from very different fields, such as social networks, electrical power grids, and the Internet, they share an important common feature: a very large number of elements. Due to their large and intricate structures, these problems were first studied in terms of their elements (nodes) and the connections between those elements (edges).
Very often the problems are so complicated that a complete description of the dynamics of the whole network is impossible. Hence, much attention has been given to local properties of the network, such as how many nodes are involved in a process, or how to estimate the probability that the elements of the network will interact with each other to produce a certain result.
In this thesis we focus on a particular category of neural network, i.e., a network that mimics the dynamics and connectivity of neurons in the brain. The nodes of the network represent neurons, while the edges connecting them are potential synaptic connections. We propose and analyse a random graph model that may predict the synaptic formation of a network and the formation of connected clusters that communicate with each other. In particular, in the resulting networks the probability of connection depends on the distance between nodes.
Thesis in joint-supervision with the University of Lund
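As an illustrative sketch of the kind of model studied in the abstract above (the uniform placement in a unit square and the exponential decay kernel are assumptions for this example, not necessarily the thesis's model), a distance-dependent random graph can be generated as follows:

```python
# Minimal sketch of a spatially embedded random graph: nearby nodes
# (neurons) connect with higher probability than distant ones.
import math
import random

def distance_dependent_graph(n, decay, rng):
    """Return node positions and an edge list, where a pair at distance d
    is connected with probability exp(-d / decay)."""
    positions = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(positions[i], positions[j])
            if rng.random() < math.exp(-d / decay):
                edges.append((i, j))
    return positions, edges

rng = random.Random(42)  # fixed seed for reproducibility
positions, edges = distance_dependent_graph(100, decay=0.1, rng=rng)
print(f"{len(edges)} potential synaptic connections among 100 neurons")
```

In the resulting graph, connected pairs are on average closer together than arbitrary pairs, mirroring the distance dependence of the connection probability.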
During the past four decades, due to miniaturization, computing devices have become ubiquitous and pervasive. Today, the number of objects connected to the Internet is increasing at a rapid pace, and this trend does not seem to be slowing down. These objects, which can be smartphones, vehicles, or any kind of sensor, generate large amounts of data that are almost always associated with a spatio-temporal context. The amount of data is often so large that processing it requires a distributed system involving the cooperation of several computers. The ability to process these data is important for society. For example, the data collected during car journeys make it possible to avoid traffic jams, to recognize the need to organize a carpool, or to plan maintenance interventions on the road network. The application domains are therefore numerous, as are the problems associated with them. The articles that make up this thesis deal with systems that share two key characteristics: a spatio-temporal context and a decentralized architecture. In addition, the systems described in these articles revolve around three temporal perspectives: the present, the past, and the future. Systems associated with the present perspective enable a very large number of connected objects to communicate in near real time according to a spatial context. Our contributions in this area enable this type of decentralized system to scale out on commodity hardware, i.e., to adapt as the volume of data arriving in the system increases. Systems associated with the past perspective, often referred to as trajectory indexes, provide access to the large volume of spatio-temporal data collected by connected objects. Our contributions in this area make it possible to handle particularly dense trajectory datasets, a problem that had not been addressed previously.
Finally, systems associated with the future perspective rely on past trajectories to predict the trajectories that connected objects will follow. Our contributions predict these trajectories at a previously unmatched granularity. Although they involve different domains, these contributions open the possibility of dealing with such problems more generically in the near future.
Information technologies (IT) have had a massive impact on the capacity of organizations to access and process information, which has ultimately increased their productivity. They have become so integrated into routines that without them organizations are unable to operate. As an example, in August 2016, Delta Air Lines was obliged to cancel almost 2,000 flights because its central system broke down. With the growing capacity of IT, business applications (e.g., enterprise systems) have been supporting increasingly complicated and individual tasks. However, these applications are often chosen based on an organization's objectives, with little consideration for individual needs. They push standardized routines that require employees to change their own. As a result, a large share of employees are unsatisfied with the way business applications support their activities. With the increasing capabilities of mobile applications, many employees have shifted from desktop to mobile applications. Because mobile devices are personal, their applications should be designed to adapt to individual work patterns.
However, despite their popularity and efficiency, very few studies have investigated the design of mobile applications and their use in organizational contexts. This dissertation addresses this gap through three interrelated research streams. Research stream 1 investigates the interplay between individual routines and mobile apps as IT artifacts. To do so, it looks into the roles that mobile apps play in supporting the ostensive and performative aspects of individual routines, and into the underlying design of mobile apps' user interfaces, based on two field studies: customer interactive support and routine patient care. Research stream 2 looks into the capacity of mobile checklists, i.e., checklists executed on mobile devices, to codify and execute routines. Checklists are a very efficient structure for supporting individual routines, as described in the existing literature and observed in the two mobile apps analyzed in research stream 1. Given the roles of checklists and their frequent use in mobile applications, I investigate how organizational knowledge is codified and adapted to different contexts, as well as how tasks are documented and validated. Research stream 3 seeks to analyze the structures and components of individual routines in order to describe, assess, and improve them. It intends to understand the extent to which activity patterns are structured vs. unstructured, and the use of IT artifacts in these patterns. Thus, it investigates the use of maturity models and process mining to support organizations in analyzing and improving their routines.
To conclude this dissertation, I discuss the application of my contributions in the context of an ongoing project involving the use of smart glasses to support individual routines, as well as the links between this dissertation and existing research in human-computer interaction.
Over the past decades, the electricity supply in Switzerland has been reliable. However, it is uncertain how the electricity supply will evolve in the long term, given the potential changes in the Swiss generation mix resulting from the nuclear phase-out and the increasing share of photovoltaics (PV). The objective of this research is to elaborate on the concept of security of supply in the electricity sector (SoES) and to analyse, in particular, the case of Switzerland.
We develop a system dynamics model to analyse the impact of these changes on SoES in the long term. Our results show that, under the current regulatory framework, the only investments committed to are those assumed for PV and wind energy until 2035. Consequently, generation adequacy deteriorates progressively and the country becomes a net importer. Given the recent large investments in pumped-storage power plants (PSP), we also analyse how the changes in the Swiss electricity market threaten their profitability. We develop an algorithm to simulate PSP operation and integrate it into our model. Although the changes in the generation mix lead to larger price differences, the drop in available cheap energy leads to few arbitrage opportunities. As current electricity systems are very complex, the elements in our model are not the only ones affecting SoES. Based on a literature review, we develop a framework comprising twelve dimensions, which cover all aspects of long-term SoES, and we provide at least one metric for each dimension.
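As a simplified illustration of the arbitrage logic behind such a simulation (this greedy heuristic and the price figures are invented for the sketch, not the thesis's actual algorithm), a pumped-storage plant pumps in the cheapest hours and generates in the most expensive ones, losing some energy per cycle:

```python
# Toy pumped-storage arbitrage: shows why fewer cheap off-peak hours
# squeeze profits even when peak prices stay high.

def arbitrage_profit(prices, cycles, efficiency=0.75):
    """Per-MWh profit: pump during the `cycles` cheapest hours, then sell
    `efficiency` MWh for each MWh pumped during the most expensive hours."""
    ordered = sorted(prices)
    buy = ordered[:cycles]          # cheapest hours -> pump
    sell = ordered[-cycles:]        # most expensive hours -> generate
    return sum(p * efficiency for p in sell) - sum(buy)

# Plenty of cheap off-peak energy available (hypothetical prices in EUR/MWh).
prices_today = [20, 25, 30, 60, 80, 90]
# Cheap hours scarcer, peaks similar: the arbitrage margin shrinks.
prices_future = [45, 50, 55, 60, 80, 90]

print(arbitrage_profit(prices_today, cycles=2))   # pump at 20+25, sell at 80+90
print(arbitrage_profit(prices_future, cycles=2))
```

With these numbers the profit drops from 82.5 to 32.5 even though peak prices are unchanged, illustrating how the loss of cheap energy erodes PSP profitability.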
Our overall conclusion is that security of supply is threatened in Switzerland. In particular, the nuclear phase-out, whatever its timing, will have major effects on prices and on the country's self-sufficiency. Our framework can be used to monitor the electricity market over time, providing insights into the expected evolution of all aspects of SoES and guidance for action.
Sensory information processing is a key process in the brain because it involves many sensory inputs. Some of them are relevant and should induce a motor or cognitive response, while many irrelevant stimuli also reach the sensory pathway and should be ignored. Synaptic plasticity in the central nervous system is a general process that enhances or decreases sensory responses according to the temporal pattern of stimuli. My main aim is to study synaptic plasticity in the somatosensory pathway, mainly in the thalamo-cortical loop. Sensory information from rodent whiskers is sent from the whisker follicle to the contralateral area of the thalamus, and from the thalamus to the barrel cortex (BC). In this doctoral thesis we performed extracellular in vivo recordings in the BC and thalamus of urethane-anesthetized rats and mice in order to unravel the mechanisms of synaptic plasticity and sensory processing. We observed that repetitive stimulation at the frequencies at which the animal explores the environment induced long-term potentiation (LTP). In addition, low-frequency stimulation could induce LTP or long-term depression (LTD) depending on the intracellular Ca2+ concentration during the stimulation period. This long-term plasticity depended on the activation of NMDA receptors and of muscarinic and nicotinic cholinergic receptors. Through an optogenetic study we showed that the basal forebrain (BF), the main source of acetylcholine (ACh) to the neocortex, sends its projections in an organized way; consequently, the ACh-dependent facilitation of cortical responses occurs in a very specific manner. We also found that the postero-medial thalamic nucleus (POm) regulates BC whisker responses through GABAergic (γ-aminobutyric acid, GABA) neurons located in the upper cortical layers.
Thesis in joint-supervision with the Universidad Autónoma de Madrid
Full text of the thesis available on Serval: https://serval.unil.ch/notice/serval:BIB_E66BE6750A09
Predictions that information technology (IT) will become a dominant driver in patient care delivery continue to proliferate. While IT's potential benefits in healthcare are manifold, past research has shown that the digitalization of medicine remains more of a promise than a reality. Given the current limitations of IT in healthcare, in this dissertation I argue that we need prescriptive design knowledge on how IT artifacts ought to be designed to function in patient care delivery.
Investigating two dominant IT artifacts in healthcare, electronic health record (EHR) systems and mobile medical apps, this dissertation unpacks the 'black box' of IT artifacts and sheds light on the design and effective use of IT artifacts in routine patient care. The dissertation comprises three interrelated research streams, each taking a specific angle on the aforementioned aspects. Research stream 1 provides a conceptual framework on the interdependencies between routines in patient care and EHR systems and devises two strategies to objectively assess and improve the effective use of EHR systems. Rather than studying the design and use of EHR systems separately, our framework suggests combining the two. Research stream 2 investigates routines at the individual level and describes the affordances of mobile apps to accommodate differences in EHR system use among individual physicians. Research stream 3 centers on the patient and studies the design of medical apps that give patients a way to self-diagnose their acute symptoms and to enhance the monitoring of an illness. This research stream presents design principles that effective medical apps should possess in order to engage the patient in the delivery of care. The theoretical contributions can be classified as mid-range theories and inform design practice by being specific about both users (i.e., patients and physicians) and IT artifacts (i.e., EHR systems and medical apps).
Motivated by both economic and environmental considerations, there has been a surge in the number of studies attempting to improve the energy efficiency of data centers (DCs). However, after analyzing the literature on the efficiency of DCs, we identified several gaps that are addressed in this thesis.
In the first paper, we study how operations management principles apply to DCs running scientific jobs. In particular, we test Little's law, the law of variability, and the law of utilization using data from three major scientific data centers. Results show that both Little's law and the law of utilization hold, while the law of variability does not. These findings give insights into how DC operations can be improved by applying operations management principles.
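For readers unfamiliar with Little's law (L = λ · W: average jobs in the system equal throughput times average time in system), here is a minimal check on a made-up job trace, in the spirit of the paper's test on data-center logs:

```python
# Toy job trace (the numbers are invented for illustration):
# each job is (arrival_time, departure_time), in hours.
jobs = [
    (0.0, 2.0), (1.0, 4.0), (2.0, 5.0), (4.0, 8.0), (6.0, 10.0),
]
horizon = 10.0  # observation window [0, horizon]

throughput = len(jobs) / horizon                               # lambda: jobs per hour
mean_time_in_system = sum(d - a for a, d in jobs) / len(jobs)  # W: hours per job

# L: time-average number of jobs in the system, computed directly
# from total job-hours spent in the system over the window.
total_job_hours = sum(d - a for a, d in jobs)
avg_jobs_in_system = total_job_hours / horizon                 # L

print(avg_jobs_in_system, throughput * mean_time_in_system)    # the two sides of L = lambda * W
```

On this trace both sides come out to 1.6 jobs; when all jobs complete inside the observation window, the identity holds exactly, which is why deviations in real logs are informative.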
The second paper reviews the most commonly used performance indicators in DCs. We analyze these indicators and find several drawbacks. To overcome them, we develop a new performance indicator that has all the characteristics a performance indicator should have according to the literature. The proposed indicator is evaluated using data from three DCs and through controlled lab tests.
In the third paper, we propose a two-step method to model and forecast the Load-at-Risk of computing systems. Data from a Finnish company's computing system are used to validate the method. Results show that the two-step method successfully models and forecasts the Load-at-Risk of computing systems, providing operators with better information about how high the system's workload could be at a given confidence level.
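By analogy with Value-at-Risk, a two-step Load-at-Risk estimate can be sketched as follows (the simple mean model, the quantile rule, and the workload figures here are illustrative assumptions, not the paper's actual method):

```python
# Sketch of a two-step Load-at-Risk estimate:
# step 1 models the expected load, step 2 adds an empirical
# quantile of the residuals around that model.

def load_at_risk(history, level=0.95):
    """Estimate the load level exceeded with probability 1 - level."""
    mean_load = sum(history) / len(history)            # step 1: mean model
    residuals = sorted(x - mean_load for x in history)
    idx = min(int(level * len(residuals)), len(residuals) - 1)
    return mean_load + residuals[idx]                  # step 2: quantile add-on

# Hypothetical hourly CPU load observations (percent).
workload = [40, 42, 38, 45, 41, 39, 60, 43, 44, 42]
print(load_at_risk(workload, level=0.95))
```

On this sample the 95% Load-at-Risk lands on the spike at 60, which is the kind of tail information an operator provisioning capacity cares about.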
To sum up, this research provides means to assist managers and operators of computing systems in improving the efficiency with which their computing resources are operated.
This thesis is devoted to the study of non-Borel ∆¹₂ pointclasses of the Baire space, using reductions by continuous functions. This work is divided into three main parts. In the first, we generalise results obtained by Duparc and Louveau to provide a complete description of the Wadge hierarchy of the class of increasing differences of coanalytic sets, under some determinacy hypothesis. In the second part, we study some ∆¹₂ pointclasses above the class of increasing differences of coanalytic sets and give a fragment of the Wadge hierarchy for those classes. Finally, we apply our results and techniques to theoretical computer science, and more precisely to the study of regular tree languages, that is, sets of labeled binary trees that are recognized by tree automata.
Thesis in co-supervision with the Université Paris-Diderot (Paris 7)
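For background, the "reductions by continuous functions" in the abstract above refer to standard Wadge reducibility on the Baire space (this is the textbook definition, not a result of the thesis):

```latex
A \le_W B
\iff
\exists f : \omega^{\omega} \to \omega^{\omega}
\text{ continuous such that } A = f^{-1}(B).
```

The Wadge hierarchy is then the ordering that $\le_W$ induces on sets (and pointclasses) of the Baire space.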
In this thesis, we develop tools to study the influence of predictors on multivariate distributions. We tackle the issue of conditional dependence modeling using generalized additive models, a natural extension of linear and generalized linear models allowing for smooth functions of the covariates. Compared to existing methods, the framework that we develop has two main advantages. First, it is completely flexible, in the sense that the dependence structure can vary with an arbitrary set of covariates in a parametric, nonparametric or semiparametric way. Second, it is both quick and numerically stable, which means that it is suitable for exploratory data analysis and stepwise model building. Starting from the bivariate case, we extend our framework to pair-copula constructions, and open new possibilities for further applied and methodological work. Our regression-like theory of the dependence, being built on conditional copulas and generalized additive models, is at the same time theoretically sound and practically useful.
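The general shape of such a model can be sketched as follows (the notation is illustrative, following a conditional version of Sklar's theorem rather than the thesis's exact formulation): the joint conditional distribution factors into conditional margins and a copula whose parameter is a generalized additive function of the covariates.

```latex
% Conditional copula with a GAM-type dependence parameter (sketch):
F(y_1, y_2 \mid \mathbf{x})
  = C\bigl(F_1(y_1 \mid \mathbf{x}),\, F_2(y_2 \mid \mathbf{x});\, \theta(\mathbf{x})\bigr),
\qquad
g\bigl(\theta(\mathbf{x})\bigr) = \beta_0 + \sum_{k} s_k(x_k),
```

where $g$ is a link function keeping the copula parameter in its valid range, and each $s_k$ may be parametric (linear), nonparametric (a smooth function), or a mix of the two, which is what makes the framework flexible.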
The ubiquity of mobile devices, and particularly smartphones, has caused the emergence of a new class of distributed applications known as Proximity-Based Mobile (PBM) applications. These applications enable a user to interact with others within a defined range and for a certain duration, for purposes such as social networking, dating, gaming, and driving. The goal of this thesis is to introduce a set of programming abstractions and algorithms that can be used for building PBM applications in a category of mobile networks called mobile ad hoc networks (MANETs). Indeed, the characteristics of MANETs make them a promising technology for enabling PBM applications. However, the existing abstractions and algorithms in the MANET literature are not fully adequate for building PBM applications. Thus, in this thesis we define proximity-based durable broadcast and proximity-based neighbor detection as the main requirements of PBM applications. Then, in each part of the thesis, we introduce abstractions and algorithms that address one of these requirements.
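A minimal sketch of the proximity-based neighbor detection requirement (illustrative only; the thesis's MANET abstractions are necessarily richer, since in an ad hoc network no node has global knowledge of all positions):

```python
# Toy neighbor detection: each device learns which peers currently
# lie within a given range. Names and coordinates are hypothetical.
import math

def detect_neighbors(node_id, positions, proximity_range):
    """Return the ids of all other nodes within `proximity_range` meters."""
    x, y = positions[node_id]
    return sorted(
        other
        for other, (ox, oy) in positions.items()
        if other != node_id and math.hypot(ox - x, oy - y) <= proximity_range
    )

positions = {"alice": (0, 0), "bob": (30, 40), "carol": (200, 0)}
print(detect_neighbors("alice", positions, proximity_range=100))  # ['bob']
```

A real PBM abstraction must also make such detection durable over time and robust to churn, which is where the thesis's algorithms come in.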
Cloud computing and its three facets (Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS)) are terms that denote new developments in the software industry. In particular, PaaS solutions, also referred to as cloud platforms, are changing the way software is being produced, distributed, consumed, and priced. Software vendors have started considering cloud platforms as a strategic option but are battling to redefine their offerings to embrace PaaS. In contrast to SaaS and IaaS, PaaS allows for value co-creation with partners to develop complementary components and applications. It thus requires multisided business models that bring together two or more distinct customer segments. Understanding how to design PaaS business models to establish a flourishing ecosystem is crucial for software vendors. This doctoral thesis aims to address this issue in three interrelated research parts. First, based on case study research, the thesis provides a deeper understanding of current PaaS business models and their evolution. Second, it analyses and simulates consumers' preferences regarding PaaS business models, using a conjoint approach to find out what determines the choice of cloud platforms. Finally, building on the previous research outcomes, the third part introduces a design theory for the emerging class of PaaS business models, grounded in an extensive action design research study with a large European software vendor. Understanding PaaS business models from a market as well as a consumer perspective will, together with the design theory, inform and guide decision makers in their business model innovation plans. It also closes gaps in research on PaaS business model design and, more generally, on platform business models.
This work investigates cooperation dilemmas in society when the population is structured as a complex network or when agents are situated in space and can migrate. Cyclic games are also investigated in the framework of migration. Cooperation is modeled by two-player games in which each player chooses between two available strategies: cooperation and defection. Among other games, we study the prisoner’s dilemma. In that game, mutual cooperation is the best joint outcome, but the structure of the game leads selfish agents to both defect: the temptation to defect is strong, and the so-called sucker payoff earned by a cooperator against a defector is very low. Using this game and others, we first study the evolution of cooperation on weighted networks and on spatial networks. We then study the evolution of cooperation when players can migrate in space in order to improve their payoffs. We find that when weights are assigned according to certain degree-weight correlations on a social network, cooperation can be strongly improved. In a second part, we show that particular spatial hierarchical topologies embedded in space lead to particularly high levels of cooperation. In a third part, exploring migration, we find that when agents imitate their neighbors randomly while migrating opportunistically, cooperation spreads in the population.
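The incentive structure of the prisoner’s dilemma described above can be sketched with the standard payoff ordering T > R > P > S; the specific numbers and names below are illustrative, not values taken from the thesis:

```python
# Illustrative prisoner's dilemma payoffs (T > R > P > S):
# T = temptation to defect, R = reward for mutual cooperation,
# P = punishment for mutual defection, S = sucker payoff.
T, R, P, S = 5, 3, 1, 0

# payoff[(my_move, their_move)] -> my payoff; 'C' = cooperate, 'D' = defect
payoff = {
    ('C', 'C'): R,
    ('C', 'D'): S,
    ('D', 'C'): T,
    ('D', 'D'): P,
}

def best_response(their_move):
    """Return the move maximizing my payoff against a fixed opponent move."""
    return max('CD', key=lambda my: payoff[(my, their_move)])

# Defection dominates: it is the best response to either opponent move,
# even though mutual cooperation (R) beats mutual defection (P).
assert best_response('C') == 'D' and best_response('D') == 'D'
assert payoff[('C', 'C')] > payoff[('D', 'D')]
```

This dominance of defection, despite mutual cooperation being collectively better, is precisely the dilemma whose resolution on networks and under migration the thesis studies.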
A large portion of Internet traffic today is due to media streaming, and this trend is still growing, as testified by the success of services like Skype, Spotify and Netflix. Media streaming consists of sending video or audio content in a continuous flow of data over the Internet and playing this content as it arrives. Since computing resources such as bandwidth, memory and processing are limited, delivering multimedia content in a scalable manner is a key challenge. This PhD thesis addresses the issue of scalable media streaming in large-scale networks.
The client-server model is a common approach to streaming, where media consumers (clients) establish a connection with a media server somewhere on the Internet. In this model, when the number of consumers increases, more dedicated servers must be added to the system, which tends to be expensive. The peer-to-peer (P2P) approach offers an alternative and naturally scalable solution, where each peer can act as both client and server. Most of the proposed P2P streaming solutions focus on routing to achieve scalability. However, routing alone is limited when resources are insufficient, which is where replication can help.
In this thesis, we propose a family of replication-based streaming protocols. Our first two protocols, named ScaleStream and ReStream, adaptively replicate media content on different peers, based on the demand in the neighborhood of each peer, in order to increase the number of consumers that can be served in parallel. These solutions are adaptive in the sense that they take into account resource constraints, such as the bandwidth capacity of peers, in order to decide when to add or remove replicas. Our last two protocols, named EagleMacaw and TurboStream, are also replication-based but additionally optimize media routing to improve efficiency and reliability, and to reduce latency.
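The demand-driven add/remove decision described above can be illustrated with a toy load-based rule; the function name, thresholds, and cost model are illustrative assumptions, not details of ScaleStream or ReStream:

```python
def replication_decision(local_demand, upload_capacity, per_consumer_cost,
                         add_threshold=0.9, remove_threshold=0.3):
    """Decide whether a peer's neighborhood needs more or fewer replicas
    of a media stream, based on how loaded the existing replica is.

    All names and thresholds here are illustrative assumptions."""
    load = (local_demand * per_consumer_cost) / upload_capacity
    if load > add_threshold:      # near saturation: add a replica to serve demand
        return 'ADD_REPLICA'
    if load < remove_threshold:   # underused: free the peer's resources
        return 'REMOVE_REPLICA'
    return 'KEEP'

# 10 consumers, each costing 1 upload unit, against a capacity of 8 units:
print(replication_decision(local_demand=10, upload_capacity=8,
                           per_consumer_cost=1))  # -> ADD_REPLICA
```

The key property such a rule captures is adaptivity: the same peer will later signal `REMOVE_REPLICA` if neighborhood demand drops, so the number of replicas tracks demand rather than being fixed in advance.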
The evolution of the economic environment, of value chains, and of organizations’ business models increases the importance of coordination, which can be defined as the management of the interdependencies between tasks carried out by different actors working towards a common goal. Organizations deploy numerous means to manage these interdependencies. In this regard, coordination activities benefit massively from the support of information and communication technologies (ICT), which are now disseminated, integrated, and connected in multiple forms in both private and professional environments. In this work, we investigated the following research question: how do the ubiquity and interconnectivity of ICT change modes of coordination?
Through four information systems studies conducted with a design science methodology, we addressed this question at two levels: that of strategic alignment between business and information systems, where coordination concerns the interdependencies between activities; and that of the execution of activities, where coordination concerns the interdependencies of individual interactions. At the strategic level, we observe that ubiquity and interconnectivity make it possible to transpose coordination mechanisms from one domain to another. By facilitating different forms of co-presence and visibility, they also increase proximity in asynchronous or remote coordination situations. At the activity level, ICT offer actors a very strong potential for participation and proximity. Such technologies give them the ability to establish responsibilities, improve their shared understanding, and plan the execution and integration of tasks.
The main contribution emerging from these four studies is that practitioners can use the ubiquity and interconnectivity of ICT to enable individuals to communicate and adjust their actions in order to define, achieve, and redefine the objectives of their joint work.
Cooperation and coordination are desirable behaviors that are fundamental for the harmonious development of society. People need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. However, cooperation may easily fall prey to exploitation by selfish individuals who only care about short-term gain. For cooperation to evolve, specific conditions and mechanisms are required, such as kinship, direct and indirect reciprocity through repeated interactions, or external interventions such as punishment.
In this dissertation we investigate the effect of the network structure of the population on the evolution of cooperation and coordination. We consider several kinds of static and dynamical network topologies, such as Barabási-Albert, social network models and spatial networks. We perform numerical simulations and laboratory experiments using the Prisoner's Dilemma and coordination games in order to contrast human behavior with theoretical results.
Thesis in joint-supervision with the University of Madrid (Carlos III)
Electricity is a strategic service in modern societies. Thus, it is extremely important for governments to be able to guarantee an affordable and reliable supply, which depends to a great extent on an adequate expansion of the generation and transmission capacities. Cross-border integration of electricity markets creates new challenges for the regulators, since the evolution of the market is now influenced by the characteristics and policies of neighbouring countries.
There is still no agreement on why and how regions should integrate their electricity markets. The aim of this thesis is to improve the understanding of integrated electricity markets and how their behaviour depends on the prevailing characteristics of the national markets and the policies implemented in each country.
We developed a simulation model to analyse under what circumstances integration is desirable. This model is used to study three cases of interconnection between two countries. Several policies regarding interconnection expansion and operation, combined with different generation capacity adequacy mechanisms, are evaluated.
The thesis is composed of three papers. In general, we conclude that electricity market integration can bring benefits if the right policies are implemented. However, a large interconnection capacity is only desirable if the countries exhibit significant complementarity and trust each other. The outcomes of policies aimed at guaranteeing security of supply at a national level can be quite counterintuitive due to the interactions between neighbouring countries and their effects on interconnection and generation investments.
Thus, it is important for regulators to understand these interactions and coordinate their decisions in order to take advantage of the interconnection without putting security of supply at risk. But it must be taken into account that even when integration brings benefits to the region, some market participants lose and might try to hinder the integration process.
This thesis deals with combinatorics, order theory and descriptive set theory.
The first contribution is to the theory of well-quasi-orders (wqo) and better-quasi-orders (bqo). The main result is the proof of a conjecture made by Maurice Pouzet in his 1978 thèse d’état, which states that any wqo whose ideal completion remainder is bqo is actually bqo. Our proof relies on new results, with both a combinatorial and a topological flavour, concerning maps from a front into a compact metric space. The second contribution is of a more applied nature and deals with topological spaces. We define a quasi-order on the subsets of every second countable T0 topological space in a way that generalises the Wadge quasi-order on the Baire space, while extending its nice properties to virtually all these topological spaces.
The Wadge quasi-order of reducibility by continuous functions is wqo on the Borel subsets of the Baire space; however, this quasi-order is far less satisfactory for other important topological spaces, such as the real line, as Hertling, Ikegami and Schlicht notably observed. Some authors have therefore studied reducibility with respect to certain classes of discontinuous functions to remedy this situation. We propose instead to keep continuity but to weaken the notion of function to that of relation. Using the notion of admissible representation studied in Type-2 theory of effectivity, we define the quasi-order of reducibility by relatively continuous relations. We show that this quasi-order both refines the classical hierarchies of complexity and is wqo on the Borel subsets of virtually every second countable T0 space – including every (quasi-)Polish space.
Thesis in joint-supervision with the Université Paris-Diderot (Paris 7)
In this thesis, we study the evolution of information systems. We are particularly interested in the factors that trigger evolution, what they represent, and how they allow us to learn more about the life cycle of information systems.
To do so, we developed a conceptual framework for the study of evolutions that takes into account not only the factors triggering evolution but also the nature of the activities undertaken to evolve. We followed a Design Science approach to design this conceptual framework. Following this approach, we developed the framework iteratively, instantiating and then evaluating it in order to refine its design. This allowed us to make several contributions, both practical and theoretical.
The first theoretical contribution of this research is the identification of four main factors triggering evolutions. These factors are elements from domains that are generally studied separately. The conceptual framework brings them together in a single tool for the study of evolutions. Another theoretical contribution is the study of systems’ life cycles according to these factors. Indeed, the repeated use of the conceptual framework to qualify evolutions highlights the main motivations for evolutions at each stage of the life cycle. By comparing the evolutions of several systems, it becomes possible to identify specific patterns of system evolution.
Regarding practical contributions, the main one concerns the steering of evolution. For an information system manager, applying the conceptual framework makes it possible to know precisely the actual allocation of resources for an evolution as well as the system’s position in its life cycle. The conceptual framework can therefore help managers in planning and defining the system’s evolution strategy. The evolution patterns identified through the application of the conceptual framework are also a valuable aid in defining the steering strategy and the activities to undertake when planning evolutions.
Finally, the conceptual framework provided the foundations for building a dashboard for monitoring the life cycle and steering the evolution of information systems.
Post-industrial societies depend on the efficiency and sustainability of their industrial production, which is controlled by specialized industrial automation computer systems. Industrial automation is dominated by global companies with proprietary solutions and relies on technologies largely replaced in other computer markets. Companies operating in this mature market constantly improve their operations and manage technology disruptions. Related decisions are based on a combination of facts, emotions and personal agendas. We study three trends in industrial automation: two research projects observe competitiveness improvements through outsourcing and process improvement, and one explores the management outlook concerning rapid technology developments in adjacent high-volume markets.
Globalization moves industrial facilities between continents and creates larger units. Responding to the changing environment requires companies to focus on core competences, process improvements and customer experience. A focus on core competences leads to outsourcing or insourcing of selected activities. We research captive outsourcing, a novel outsourcing model in which the outsourced unit, located in an emerging country, is an integral part of the company’s operations rather than an external supplier. It is not merely a low-cost engineering pool but has responsibility for complete subsystems. Since employee commitment and a low attrition rate are key to the success of this model, the company focuses on employee satisfaction and develops a brand as a good local employer. Next, we research support process improvement from the customer perspective and implement a new support process based on an end-to-end lead-time measurement system for the reduction and faster resolution of customer issues.
New System-on-Chip technologies are disrupting Information and Communications Technology (ICT) markets. Shipment volumes of smartphones and tablets exceed those of all earlier computing technologies. As the third trend, we research management views on the future from the perspectives of customers, incumbents and newcomers. To benefit from the new technologies, a disruption management function is considered necessary, and a new quantitative model for disruption assessment is proposed.
The emergence of powerful new technologies, the existence of large quantities of data, and increasing demands for the extraction of added value from these technologies and data have created a number of significant challenges for those charged with both corporate and information technology management. The possibilities are great, the expectations high, and the risks significant. Organisations seeking to employ cloud technologies and exploit the value of the data to which they have access, be this in the form of “Big Data” available from different external sources or data held within the organisation, in structured or unstructured formats, need to understand the risks involved in such activities. Data owners have responsibilities towards the subjects of the data and must also, frequently, demonstrate that they are in compliance with current standards, laws and regulations.
This thesis sets out to explore the nature of the technologies that organisations might utilise, identify the most pertinent constraints and risks, and propose a framework for the management of data from discovery to external hosting that will allow the most significant risks to be managed through the definition, implementation, and performance of appropriate internal control activities.
"Contribution of systemic science in the improvement of understanding of risk management system" offers a holistic view of enterprise-wide risk management.
Risk management is often assessed through linear methods that stress positioning and causal logical frameworks: to a given event correspond given consequences, and hence given risks. Consideration of the interrelationships between risks is often overlooked, and risks are rarely analyzed in their dynamic and nonlinear components.
This work shows what systemic methods, including the study of complex systems, can bring to the knowledge, management and anticipation of business risks, on both the conceptual and the practical sides. Starting from the definitions of systems and risks in various areas, as well as the methods used to manage risk, this work confronts these concepts with approaches from complex systems analysis and modeling.
This work highlights the reductive effects of some business risk analysis methods, as well as the limitations of risk universes caused in particular by unsuitable definitions. As a result, this work also provides chief officers with a range of different tools and approaches that allow them a better understanding of complexity and, as such, a gain in efficiency in their risk management practices. This results in a better fit between strategy and risk management. Ultimately, the firm gains in risk management maturity.
We are currently witnessing a diffusion of Information and Communication Technologies (ICT) on a global scale. Yet this diffusion proceeds at a different pace in each nation (and even among regions within a given country), which creates a “digital” gap on top of multiple inequalities already present. This computing and technological revolution engenders many changes in social relationships and enables numerous applications destined to simplify our lives.
Amine Bekkouche takes a closer look at the issue of e-government as an important consequence of ICTs, following the example of electronic commerce. First, he presents a synthesis of the main concepts in e-government as well as a panoramic view of the global situation in this domain.
Subsequently, he studies e-government from the perspective of emerging countries, in particular through the illustration of a representative developing country. He then offers concrete solutions, taking the education sector as their starting point, to allow for a “computed digitalisation” of society that contributes to reducing the digital gap. Thereafter, he broadens these proposals to other domains and formulates recommendations to help their implementation. Finally, he concludes with perspectives that may constitute further research tracks and enable the elaboration of development projects, through the appropriation of ICTs, in order to improve the condition of the administered and, more generally, that of the citizen.
Many everyday life problems involve finding an optimal solution among a finite set of possibilities, termed the problem search space. In practice, enumerating all the possibilities becomes infeasible beyond a given problem size, but approximate methods exist. In the most general case, these methods start with a candidate solution and gradually refine it through partial modifications until no improvement is possible. The variation operation, by connecting candidate solutions, induces a neighborhood structure in the search space, such that the search process can be described as a trajectory over this configuration space. Heuristic methods try to guide the search towards better solutions. Their performance therefore depends on the structure of the space being searched.
In this thesis, we analyze such structure by looking at the graph having as nodes solutions that are locally optimal and that act as attractors to the search trajectory, and as edges the possible transitions between those local optima. This allows us to employ methods from the science of complex networks in order to characterize in a novel way the search space of hard combinatorial problems; we argue that such network characterization can advance our understanding of the structural and dynamical properties of these spaces.
We investigate several methodologies to build the network of local optima and we apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and help to predict problem hardness as measured from the performances of trajectory-based search heuristics.
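The construction of a network of local optima can be sketched on a toy landscape; the 4-bit random landscape, the one-bit-flip move operator, and the basin-adjacency edge rule below are illustrative simplifications, not the exact methodologies of the thesis:

```python
import itertools
import random

# Toy fitness landscape over 4-bit strings (random values stand in for an
# NK-style landscape; the network construction, not the landscape, is the point).
random.seed(0)
N = 4
fitness = {bits: random.random() for bits in itertools.product((0, 1), repeat=N)}

def neighbors(bits):
    """All one-bit-flip neighbors (the move operator inducing the search space)."""
    return [bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(N)]

def hill_climb(bits):
    """Follow best-improvement moves until a local optimum (attractor) is reached."""
    while True:
        best = max(neighbors(bits), key=fitness.get)
        if fitness[best] <= fitness[bits]:
            return bits
        bits = best

# Nodes: local optima (attractors). Edges: a one-move transition from a point
# in one basin of attraction into the basin of a different optimum.
basin = {bits: hill_climb(bits) for bits in fitness}
optima = set(basin.values())
edges = {(basin[b], basin[n]) for b in fitness for n in neighbors(b)
         if basin[b] != basin[n]}
print(len(optima), "local optima,", len(edges), "directed edges")
```

On realistic instances the same idea is applied with problem-specific move operators, and network metrics (degree distributions, clustering, path lengths) are then computed on the resulting graph.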
Enterprise-wide architecture has become a necessity for organizations to (re)align information technology (IT) to changing business requirements. Since a city planning metaphor inspired enterprise-wide architecture, this dissertation’s research axes can be outlined by similarities between cities and enterprises. Both are characterized as dynamic super-systems that need to address the evolving interest of various architecture stakeholders. Further, both should simultaneously adhere to a set of principles to guide the evolution of architecture towards the expected benefits. The extant literature on enterprise-wide architecture not only disregards architecture adoption’s complexities but also remains vague about how principles guide architecture evolution. To bridge this gap, this dissertation contains three interrelated research streams examining the principles and adoption of enterprise-wide architecture.
The first research stream investigates organizational intricacies inherent in architecture adoption. It characterizes architecture adoption as an ongoing organizational adaptation process. By analyzing organizational response behaviors in this adaptation process, it also identifies four archetypes that represent very diverse architecture approaches. The second research stream ontologically clarifies the nature of architecture principles and outlines new avenues for theoretical contributions. This research stream also provides an empirically validated set of principles and proposes a research model illustrating how principles can be applied to generate the expected architecture benefits. The third research stream examines architecture adoption in multinational corporations (MNCs). MNCs have unique organizational characteristics and must constantly balance global integration with local responsiveness. This research stream characterizes MNCs’ architecture adoption as a continuous endeavor to synchronize the architecture with stakeholders’ beliefs about how to strike that balance.
To conclude, this dissertation provides a thorough explanation of a long-term journey in which organizations learn over time to adopt an effective architecture approach. It also clarifies the role of principles to purposefully guide the aforementioned learning process.
There is a lack of dedicated tools for business model design at a strategic level. However, in today’s economic world, the ability to quickly reinvent a company’s business model is essential to stay competitive. This research focused on identifying the functionalities that are necessary in a computer-aided design (CAD) tool for the design of business models in a strategic context. Using a design science research methodology, a series of techniques and prototypes have been designed and evaluated to offer solutions to the problem. The work is a collection of articles which can be grouped into three parts:
The first part establishes the context of how the Business Model Canvas (BMC) is used to design business models and explores the way in which CAD can contribute to the design activity.
The second part extends on this by proposing new techniques and tools that support the elicitation, evaluation (assessment) and evolution of business model designs with CAD. This includes features such as multi-color tagging to easily connect elements, rules to validate the coherence of business models, and features adapted to the business model proficiency level of the tool’s users. A new way to describe and visualize multiple versions of a business model, thereby helping to address the business model as a dynamic object, was also researched.
The third part explores extensions to the Business Model Canvas, such as an intermediary model which supports IT alignment by connecting the business model and the enterprise architecture, and a business model pattern for privacy in a mobile environment, using privacy as a key value proposition.
The prototyped techniques and the propositions for using CAD tools in business model design will allow commercial CAD developers to create tools that are better suited to the needs of practitioners.
While mobile technologies can provide great personalized services for mobile users, they also threaten their privacy. This personalization-privacy paradox is particularly salient for context-aware mobile applications, where a user’s behaviors, movements and habits can be associated with the consumer’s personal identity.
In this thesis, I studied privacy issues in the mobile context, focusing particularly on the design of an adaptive privacy management system for context-aware mobile devices and exploring the role of personalization and control over users’ personal data. This allowed me to make multiple contributions, both theoretical and practical. On the theoretical side, I propose and prototype an adaptive single sign-on solution that uses a user’s context information to protect the user’s private information on smartphones. To validate this solution, I first showed that a user’s context is a unique user identifier and that context-awareness technology can increase the user’s perceived ease of use of the system and the service provider’s authentication security. I then followed a design science research paradigm and implemented this solution in a mobile application called “Privacy Manager”. I evaluated its utility through several focus group interviews; overall, the proposed solution fulfilled the expected functions, and users expressed their intention to use the application. To better understand the personalization-privacy paradox, I built on the theoretical foundations of privacy calculus and the technology acceptance model to conceptualize a theory of users’ mobile privacy management. I also examined the role of personalization and control in my model and how these two elements interact with the privacy calculus and mobile technology models. In the practical realm, this thesis contributes to the understanding of the tradeoff between the benefits of personalized services and the user privacy concerns they may cause. By pointing out new opportunities to rethink how a user’s context information can protect private data, it also suggests new elements for privacy-related business models.
Games are powerful and engaging. On average, one billion people spend at least one hour a day playing computer and video games. This is even more true of the younger generations. Our students have become the “digital natives”, the “gamers”, the “virtual generation”. Research shows that those who are most at risk of failure in the traditional classroom setting also spend more time than their counterparts playing video games. They might thrive, given a different learning environment.
Educators have the responsibility to align their teaching style to these younger generation learning styles. However, many academics resist the use of computer-assisted learning that has been “created elsewhere”. This can be extrapolated to game-based teaching: even if educational games were more widely authored, their adoption would still be limited to the educators who feel a match between the authored games and their own beliefs and practices. Consequently, game-based teaching would be much more widespread if teachers could develop their own games, or at least customize them. Yet, the development and customization of teaching games are complex and costly.
This research uses a design science methodology, leveraging gamification techniques, active and cooperative learning theories, as well as immersive sandbox 3D virtual worlds, to develop a method which allows management instructors to transform any off-the-shelf case study into an engaging collaborative gamified experience. This method is applied to marketing case studies, and uses the sandbox virtual world of Second Life.
There is no doubt about the necessity of protecting digital communication: citizens are entrusting their most confidential and sensitive data to digital processing and communication, and so do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures we seriously depend on in our daily lives. Transportation services, financial services, energy grids, and food production and distribution networks are only a few examples of such infrastructures. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. Yet most digital communication is not secure today, even though some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies.
Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution (QKD). QKD is difficult to understand; it is complex, technically challenging, and costly. Yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. It is disputed whether technically and economically feasible applications can be found.
Our vision is that, despite its technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
A mobile ad hoc network (MANET) is a decentralized and infrastructure-less network. This thesis aims to provide system-level support for developers of applications or protocols in such networks. To do this, we propose contributions in both the algorithmic and the practical realms. In the algorithmic realm, we contribute to the field by proposing different context-aware broadcast and multicast algorithms for MANETs, namely six-shot broadcast, six-shot multicast, PLAN-B, and a generic algorithmic approach to optimize the power consumption of existing algorithms. We compare each proposed algorithm to existing algorithms that are either probabilistic or context-aware, and evaluate their performance through simulations. We demonstrate that in some cases context-aware information, such as location or signal strength, can improve efficiency. In the practical realm, we propose a testbed framework, namely ManetLab, to implement and deploy MANET-specific protocols and to evaluate their performance. This testbed framework aims to increase the accuracy of performance evaluation compared to simulations, while keeping the ease with which simulators reproduce a performance evaluation. By evaluating the performance of different probabilistic algorithms with ManetLab, we observe that simulations and testbeds should be used in a complementary way. In addition to these original contributions, we also provide two surveys of system-level support for ad hoc communications in order to establish the state of the art. The first covers existing broadcast algorithms; the second covers existing middleware solutions and the way they deal with privacy, especially location privacy.
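As a rough illustration of the kind of probabilistic broadcast algorithms evaluated in this line of work, here is a minimal gossip-style simulation; the topology, parameter names, and forwarding rule are illustrative assumptions, not the six-shot algorithms themselves:

```python
import random

def gossip_broadcast(adjacency, source, p, rng=random.Random(1)):
    """Simulate one probabilistic broadcast: every node that receives the
    message for the first time rebroadcasts it with probability p.
    Returns the set of nodes reached."""
    reached = {source}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        if node != source and rng.random() > p:
            continue  # this node received the message but stays silent
        for neighbor in adjacency[node]:
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append(neighbor)
    return reached

# Illustrative 5-node line topology; node 0 starts the broadcast.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(len(gossip_broadcast(line, 0, p=1.0)))  # p=1 reaches all 5 nodes
```

Lowering `p` saves transmissions (and power) at the risk of incomplete delivery; context-aware variants replace the fixed `p` with a decision based on, for example, location or signal strength.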
Thesis in joint-supervision with the Université Paris-Diderot
Queueing is a fact of life that we witness daily. We have all had the experience of waiting in line for some reason, and we also know it is an annoying situation. As the adage goes, "time is money"; this is perhaps the best way of stating what queueing problems mean for customers. Human beings are not very tolerant, and they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service.
Studies of queueing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions; the individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this work has been useful for improving the efficiency of many queueing systems and for designing new processes in social and physical systems, it has provided only a limited ability to explain the behaviour observed in many real queues.
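For reference, the classical performance measures mentioned above have simple closed forms in the textbook M/M/1 model. The sketch below computes them for given arrival and service rates; these are standard textbook results, not part of this dissertation's contribution.

```python
# Illustrative only: classical M/M/1 closed-form performance measures
# (standard queueing-theory results, not the behavioural model studied
# in this dissertation).

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Return standard M/M/1 performance measures (requires lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable unless arrival_rate < service_rate")
    rho = arrival_rate / service_rate          # server utilisation
    L = rho / (1 - rho)                        # mean number of customers in system
    W = 1 / (service_rate - arrival_rate)      # mean time spent in system
    Wq = rho / (service_rate - arrival_rate)   # mean waiting time in queue
    Lq = arrival_rate * Wq                     # mean queue length
    return {"utilisation": rho, "L": L, "W": W, "Wq": Wq, "Lq": Lq}

# Example: 2 arrivals/hour served at 3 customers/hour.
m = mm1_metrics(arrival_rate=2.0, service_rate=3.0)
```

For instance, with these rates the server is busy two thirds of the time and a customer spends on average one hour in the system.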
In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field.
Digitalization empowers the Internet by allowing several virtual representations of reality, including that of identity. We leave an increasingly digital footprint in cyberspace, and this situation puts our identity at high risk. Privacy is a right and a fundamental social value that could play a key role as a medium to secure digital identities. Identity functionality is increasingly delivered as sets of services rather than monolithic applications, so an identity layer in which identity and privacy management services are loosely coupled, publicly hosted and available for on-demand calls is a more realistic and acceptable scenario. Identity and privacy should be interoperable and distributed through the adoption of service orientation and implementation based on open standards (technical interoperability). The objective of this project is to provide a way to implement interoperable, user-centric, digital identity-related privacy that responds to the distributed nature of federated identity systems. It is recognized that technical initiatives, emerging standards and protocols are not enough to resolve the concerns surrounding the multi-faceted and complex issue of identity and privacy. For this reason, they should be apprehended from a global perspective through an integrated, multidisciplinary approach. This approach dictates that privacy laws, policies, regulations and technologies be crafted together from the start, rather than attached to digital identity after the fact. Thus, we derive Digital Identity-Related Privacy (DigIdeRP) requirements from global, domestic and business-specific privacy policies. These requirements take the shape of business interoperability.
We suggest a layered implementation framework (the DigIdeRP framework), in accordance with the model-driven architecture (MDA) approach, that helps an organization's security team turn business interoperability into technical interoperability in the form of a set of services that fit a Service-Oriented Architecture (SOA): the Privacy-as-a-Set-of-Services (PaaSS) system. The DigIdeRP framework serves as a basis for a shared understanding between business management and technical managers on digital identity-related privacy initiatives. The layered DigIdeRP framework presents five practical layers as an ordered sequence forming the basis of a DigIdeRP project roadmap; in practice, however, the process is iterative, to ensure that each layer effectively supports and enforces the requirements of the adjacent ones. Each layer is composed of a set of blocks, which determine a roadmap that the security team can follow to successfully implement PaaSS. Several blocks' descriptions are based on the OMG SoaML modeling language and on BPMN process descriptions. We identified, designed and implemented the seven services that form PaaSS and described their consumption. The PaaSS Java (JEE) project, WSDL, and XSD code are given and explained.
The coverage and volume of geo-referenced datasets are extensive and growing incessantly. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of such data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist knowledge extraction from spatio-temporal geo-referenced data and thus improve decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Maps Component Planes. In addition, I present methodologies that, combined with FGHSON and the Tree-structured SOM Component Planes, integrate space and time seamlessly and simultaneously in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities and numbers of clusters. The most important characteristics of the FGHSON include: (1) It does not require an a-priori setup of the number of clusters. (2) The algorithm executes several self-organizing processes in parallel; hence, when dealing with large datasets, the processes can be distributed, reducing the computational cost.
(3) Only three parameters are necessary to set up the algorithm. In the case of the Tree-structured SOM Component Planes, the novelty of the algorithm lies in its ability to create a structure that allows visual exploratory analysis of large, high-dimensional datasets. The algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging the projections of similar variables in the same branches of the tree. Hence, similarities in the variables' behavior can easily be detected (e.g. local correlations, maximal and minimal values, and outliers). Both FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving very efficient in the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although the algorithms presented here have been used in several areas, to my knowledge no previous work has applied and compared the performance of these techniques on spatio-temporal geospatial data, as is presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by means of the FGHSON clustering algorithm. The developed methodologies are used in two case studies: in the first, the objective was to find similar agroecozones through time; in the second, to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production.
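As an illustration of the family of algorithms discussed, the following minimal sketch implements the basic one-dimensional self-organizing map update rule on which SOM-based methods build. It is a simplified, hypothetical example, not the FGHSON algorithm itself, which adds fuzzy membership and hierarchical growth on top of this basic rule.

```python
import math
import random

# Minimal 1-D self-organizing map sketch (illustrative; FGHSON extends
# this basic competitive-learning rule with fuzziness and hierarchy).

def train_som(data, n_units=4, epochs=50, lr=0.5, sigma=1.0, seed=0):
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit: the unit whose weight is closest to the sample
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                # Gaussian neighbourhood: influence shrinks with grid distance
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] += lr * h * (x - weights[i])
        lr *= 0.95  # decay the learning rate over epochs
    return sorted(weights)

# Two well-separated 1-D clusters; the extreme units should settle near them.
prototypes = train_som([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])
```

After training, the sorted unit weights act as cluster prototypes spanning the data.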
Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface tool which integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing.
While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading some file, the P2P application is in parallel serving that file to other users. Such peers could have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user could decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively.
To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution makes it possible to increase availability and to reduce communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer.
Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment. Each protocol is evaluated through a set of simulations. The adaptiveness aspect of our solutions relies on the fact that they take into account the constraints of the underlying system in a proactive manner.
To model these constraints, we define an environment approximation algorithm allowing us to obtain an approximated view of the system or part of it. This approximated view includes the topology and the components' reliability, expressed in probabilistic terms.
To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize the broadcast reliability. Here, the broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled in terms of message quotas reflecting the receiving and sending capacities at each node. To allow deployment in a large-scale system, we take into account the available memory at processes by limiting the view they have to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, which are based on a propagation overlay that tends towards the global tree overlay and adapts to certain constraints of the underlying system.
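To illustrate the notion of path reliability used above, the following sketch treats the reliability of a path as the product of its links' success probabilities and selects the most reliable path with a best-first search that maximizes that product. This is an illustrative reconstruction under stated assumptions, not the thesis's actual overlay algorithm; the toy network and probabilities are hypothetical.

```python
import heapq

# Illustrative sketch: since all link probabilities are <= 1, the product of
# probabilities along a path only decreases, so a Dijkstra-style best-first
# search on negated reliability finds the most reliable path.

def most_reliable_path(links, src, dst):
    """links: dict mapping node -> list of (neighbour, success_probability)."""
    best = {src: 1.0}
    heap = [(-1.0, src, [src])]  # negate reliability to use Python's min-heap
    while heap:
        neg_rel, node, path = heapq.heappop(heap)
        rel = -neg_rel
        if node == dst:
            return rel, path
        for nxt, p in links.get(node, []):
            cand = rel * p
            if cand > best.get(nxt, 0.0):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt, path + [nxt]))
    return 0.0, []

# Hypothetical 4-node network: two routes from "a" to "d".
net = {"a": [("b", 0.9), ("c", 0.5)], "b": [("d", 0.8)], "c": [("d", 0.99)]}
reliability, path = most_reliable_path(net, "a", "d")
```

Here the route through "b" (0.9 × 0.8 = 0.72) beats the route through "c" (0.5 × 0.99 = 0.495), even though its weakest single link is weaker.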
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
Game theory describes and analyzes strategic interaction. A distinction is usually made between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. A dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far; in the case of imperfect information, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which describes a game rather sparsely, merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games, in the sense of identifying the choices that rational players should take. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character.
This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated; Aumann's sufficient conditions for backward induction are also presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness; sufficient conditions for backward induction are then derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered.
In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
Thesis in joint-supervision with the University of Maastricht
The fight against money laundering has become a priority for states and governments, with the aim, on the one hand, of preserving the economy and the integrity of financial marketplaces and, on the other, of depriving criminal organizations of financial resources. In this context, the main concern of the Algerian authorities in charge of combating this phenomenon is to put in place a system capable of detecting laundering mechanisms, assessing the threat they pose and, on the basis of this knowledge, defining and deploying the most effective and efficient countermeasures. We observe, however, that conducting laundering investigations as a follow-up to a predicate crime has shown its limits in terms of establishing evidence, solving cases and recovering assets. We therefore believe it would be wiser to establish, upstream, "systematic" monitoring of financial flows and of unusual and/or suspicious transactions and, from there, to identify possible laundering operations without necessarily knowing the initial crime, while maintaining the balance between an "all-security" stance oriented towards increased surveillance of flows and the preservation of privacy and individual liberties.
Our thesis takes a critical look at the current anti-money-laundering system in Algeria, which we evaluate and in which we identify several shortcomings. To address the problems identified, we propose strategic, organizational, methodological and technological solutions, integrated into a coherent operational framework at the national and international levels.
Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective because it ignores the need for a global, systemic and multidimensional approach to Information Security evaluation, even though the overall security level is widely considered to be only as strong as its weakest link.
This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective.
Ubiquitous Computing is the emerging trend in computing systems. Based on this observation, this thesis proposes an analysis of the hardware and environmental constraints that govern pervasive platforms. These constraints have a strong impact on the programming of such platforms, and solutions are therefore proposed to facilitate this programming at both the platform and node levels.
The first contribution presented in this document proposes a combination of agent-oriented programming with the principles of bio-inspiration (Phylogenesys, Ontogenesys and Epigenesys) to program pervasive platforms such as the PERvasive computing framework for modeling comPLEX virtually Unbounded Systems platform.
The second contribution proposes a method to efficiently program parallelizable applications on each computing node of this platform.
Thesis in joint-supervision with the Université de Montpellier
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending what is expected from the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and fault tolerance of systems in two different contexts.
Thesis in joint-supervision with the University of Turin
In this thesis, we study the use of prediction markets for technology assessment. We particularly focus on their ability to assess complex issues, the design constraints required for such applications and their efficacy compared to traditional techniques. To achieve this, we followed a design science research paradigm, iteratively developing, instantiating, evaluating and refining the design of our artifacts. This allowed us to make multiple contributions, both practical and theoretical.
We first showed that prediction markets are adequate for properly assessing complex issues. We also developed a typology of design factors and design propositions for using these markets in a technology assessment context. Then, we showed that they are able to solve some issues related to the R&D portfolio management process and we proposed a roadmap for their implementation. Finally, by comparing the instantiation and the results of a multi-criteria decision method and a prediction market, we showed that the latter are more efficient, while offering similar results. We also proposed a framework for comparing forecasting methods, to identify the constraints based on contingency factors. In conclusion, our research opens a new field of application of prediction markets and should help hasten their adoption by enterprises.
The use of the Internet as a shopping and purchasing medium has seen exceptional growth. However, 99% of new online businesses fail. Most online buyers do not come back for a repurchase, and 60% abandon their shopping cart before checkout. Indeed, after the first purchase, online consumer retention becomes critical to the success of the e-commerce vendor. Retaining existing customers can save costs, increase profits, and is a means of gaining competitive advantage.
Past research identified loyalty as the most important factor in achieving customer retention, and commitment as one of the most important factors in relationship marketing, providing a good description of the type of thinking that leads to loyalty. Yet we could not find an e-commerce study investigating the impact of both online loyalty and online commitment on online repurchase. One of the advantages of online shopping is the ability to browse for the best price with one click; yet we could not find an empirical e-commerce study investigating the impact of post-purchase price perception on online repurchase. The objective of this research is to develop a theoretical model aimed at understanding online repurchase, or purchase continuance from the same online store.
Our model was tested in a real e-commerce context with an overall sample of 1,866 real online buyers from the same online store. The study focuses on repurchase; therefore, randomly selected respondents had purchased from the online store at least once prior to the survey. Five months later, we tracked respondents to see if they actually came back for a repurchase.
Our findings show that online Intention to repurchase has a non-significant impact on online Repurchase. Online post-purchase Price perception and online Normative Commitment have a non-significant impact on online Intention to repurchase, whereas online Affective Commitment, online Attitudinal Loyalty, online Behavioral Loyalty, and online Calculative Commitment have a positive impact on online Intention to repurchase. Furthermore, online Attitudinal Loyalty partially mediates between online Affective Commitment and online Intention to repurchase, and online Behavioral Loyalty partially mediates between online Attitudinal Loyalty and online Intention to repurchase.
We conducted two follow-up analyses: 1) on a sample of first-time buyers, we found that online post-purchase Price perception has a positive impact on Intention; 2) we divided the main study's sample into Swiss-French and Swiss-German repeat buyers, and the results show that the Swiss-French display more emotion when shopping online than the Swiss-Germans. Our findings contribute to academic research as well as to practice.
Game theory is a branch of applied mathematics used to analyze situations where two or more agents interact. Originally it was developed as a model for conflicts and collaborations between rational and intelligent individuals. It now finds applications in the social sciences, economics, biology (particularly evolutionary biology and ecology), engineering, political science, international relations, computer science, and philosophy. Networks are an abstract representation of interactions, dependencies or relationships. Networks are extensively used in all the fields mentioned above and in many more. Much useful information about a system can be discovered by analyzing the current state of a network representation of that system. In this work we apply some of the methods of game theory to populations of interconnected agents. A population is represented by a network of players in which one player can interact with another only if there is a connection between them. In the first part of this work we show that the structure of the underlying network has a strong influence on the strategies that the players adopt to maximize their utility. We then introduce a supplementary degree of freedom by allowing the structure of the population to be modified during the simulations. This modification allows the players to reshape their environment in order to optimize the utility they can obtain.
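A minimal sketch of the kind of dynamics described, strategy imitation on a network playing a prisoner's dilemma, is given below. The payoff values, the synchronous imitate-the-best update rule and the ring topology are illustrative assumptions, not the exact model studied in this work.

```python
# Generic sketch of strategy imitation on a network (illustrative; the
# simulations and payoff schemes in this thesis may differ). Each player
# plays a prisoner's dilemma with every neighbour, then copies the
# strategy of the best-scoring player in its neighbourhood.

# PAYOFF[(my_move, their_move)]: T=5 > R=3 > P=1 > S=0 (prisoner's dilemma)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def step(neighbours, strategies):
    # accumulate each player's payoff against all of its neighbours
    score = {v: sum(PAYOFF[(strategies[v], strategies[u])] for u in neighbours[v])
             for v in neighbours}
    # synchronous update: copy the strategy of the best-scoring player
    # among oneself and one's neighbours (ties broken by list order)
    new = {}
    for v in neighbours:
        best = max([v] + list(neighbours[v]), key=lambda u: score[u])
        new[v] = strategies[best]
    return new

# Ring of 6 players; the first three cooperate, the rest defect.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
state = {i: ("C" if i < 3 else "D") for i in range(6)}
state = step(ring, state)
```

Even in this tiny example the topology matters: the cooperator shielded by two cooperating neighbours keeps its strategy, while the cooperators on the boundary face high-scoring defectors.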
The μ-calculus is an extension of modal logic with fixpoint operators. In this work we study the complexity of certain fragments of this logic from two different but closely related points of view: one syntactic (or combinatorial), the other topological. From the syntactic point of view, the properties definable in this formalism are classified according to the combinatorial complexity of the formulas of the logic, that is, according to the number of alternations of fixpoint operators. Comparing two sets of models thus amounts to comparing the syntactic complexity of the associated formulas. From the topological point of view, the properties definable in this logic are compared by means of continuous reductions, or according to their positions in the Borel hierarchy or the projective hierarchy.
Thesis in joint-supervision with the Université Bordeaux-I, France
The sharing and reuse of learning objects is still a utopia. The pooling of educational documents and their adaptation to different contexts have been the subject of a great deal of work. One problematic aspect concerns their description, which must be as precise as possible in order to facilitate their management and, more specifically, targeted access. This description is generally performed by instantiating a set of standardized descriptors, or metadata (LOM, ARIADNE, DC, etc.). It must be noted that despite the existence of these standards, some of which are relatively unconstraining, few educators or authors engage in this exercise, which remains tedious and unrewarding. We started from the idea that if indexing could be performed automatically with a good degree of accuracy, part of the solution would be found. To this end, we first analyzed the blocking factors in the manual generation performed by the instructional engineers of the University of Lausanne. The complexity of these factors (both human and technical) reinforced our conviction that automatic metadata generation was indeed apt to circumvent the difficulties identified. We therefore developed an application for automatic metadata generation which focuses on content as the sole source of extraction. An in-depth analysis of the results obtained allowed us to observe that: - For unstructured documents, our application yields satisfactory results with respect to metadata quality indicators (completeness, precision, logical consistency and coherence). - For structured documents, automatic generation proved unsatisfactory insofar as it does not exploit the semantic elements (structure, annotations) they contain.
In this context we thought it was possible to do better. We therefore pursued our work in order to propose a second application that takes advantage of the potential of structured documents and the related transformation languages (XSLT) to improve search within these documents. This application exploits all the semantic elements (structure, annotations) and constitutes another alternative to metadata-based search. Moreover, search based on annotations and structure offers the additional advantage of retrieving not only the documents themselves, but also parts of documents. This feature brings a considerable improvement over metadata-based search, which only gives access to whole documents. In conclusion, we show through appropriate examples that, depending on the type of document, it is possible either to index documents automatically to facilitate retrieval, in the case of unstructured documents, or to exploit their semantic content directly, in the case of structured documents.
A firm's competitive advantage can arise from internal resources as well as from an interfirm network. This dissertation investigates the competitive advantage of a firm involved in an innovation network by integrating strategic management theory and social network theory. It develops theory and provides empirical evidence illustrating how a networked firm enables network value and appropriates this value in an optimal way according to its strategic purpose. The four inter-related essays in this dissertation provide a framework that sheds light on the extraction of value from an innovation network by managing and designing the network in a proactive manner.
This PhD thesis addresses the issue of alleviating the burden of developing ad hoc applications. Such applications have the particularity of running on mobile devices, communicating in a peer-to-peer manner and implementing some proximity-based semantics. A typical example of such an application is a radar application in which users see their own avatar, as well as the avatars of their friends, on a map on their mobile phone. Such applications have become increasingly popular with the advent of the latest generation of mobile smartphones, with their impressive computational power, their peer-to-peer communication capabilities and their location detection technology. Unfortunately, the existing programming support for such applications is limited, hence the need to address this issue in order to alleviate their development burden.
This thesis tackles this problem by providing several tools to support application development. First, it provides the location-based publish/subscribe service (LPSS), a communication abstraction that elegantly captures recurrent communication issues and thus dramatically reduces code complexity. LPSS is implemented in a modular manner in order to target two different network architectures. One pragmatic implementation is aimed at mainstream infrastructure-based mobile networks, where mobile devices communicate through fixed antennas. The other, fully decentralized implementation targets emerging mobile ad hoc networks (MANETs), where no fixed infrastructure is available and communication can only occur in a peer-to-peer fashion. For each of these architectures, it provides various implementation strategies, tailored to different application scenarios, that can be parameterized at deployment time. Second, this thesis provides two location-based message diffusion protocols, namely 6Shot broadcast and 6Shot multicast, specifically aimed at MANETs and fine-tuned to serve as building blocks for LPSS. Finally, this thesis proposes Phomo, a phone motion testing tool that makes it possible to test the proximity semantics of ad hoc applications without having to move around with mobile devices. These development support tools have been packaged in a coherent middleware framework called Pervaho.
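The essence of a location-based publish/subscribe abstraction can be sketched as follows. This is a deliberately minimal model in the spirit of LPSS, with names and an API of our own invention (not the thesis's actual interface): subscribers register a position and an interest radius, and a published message is delivered only to subscribers whose radius covers the publication point.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Subscriber:
    x: float
    y: float
    radius: float                              # interest range around the subscriber
    inbox: list = field(default_factory=list)  # delivered messages

class LocationPubSub:
    """Illustrative location-based publish/subscribe service (centralized sketch)."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, sub):
        self.subscribers.append(sub)

    def publish(self, x, y, message):
        # Deliver only to subscribers whose range covers the publication point.
        for sub in self.subscribers:
            if math.hypot(sub.x - x, sub.y - y) <= sub.radius:
                sub.inbox.append(message)

bus = LocationPubSub()
near = Subscriber(x=0.0, y=0.0, radius=100.0)
far = Subscriber(x=500.0, y=500.0, radius=100.0)
bus.subscribe(near)
bus.subscribe(far)
bus.publish(10.0, 10.0, "friend nearby")
```

The sketch shows why such an abstraction reduces application code: the proximity filtering is handled once by the service, instead of being reimplemented in every application. A real implementation would additionally have to handle the decentralized (MANET) case, where no central `bus` exists.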
When searching for information, learners very often face problems of guidance and personalization. These problems are all the more significant when the search takes place in an open environment such as the Web. Indeed, in that case there is currently no relevance control over the proposed resources, nor any check that they actually match the learner's specific needs.
Through a review of the state of the art, we observed the absence of a reference model addressing the problems related (i) on the one hand to learning resources, in particular the heterogeneity of their structure and description and their protection in terms of copyright, and (ii) on the other hand to the learner as a user, in particular the acquisition of the elements characterizing the learner and the adaptation strategy to offer.
Our objective is to propose an adaptive system based on learning resources drawn from an environment with controlled openness. This system automatically generates, without the intervention of a pedagogical expert, a personalized learning path from resources made available through trusted sources.
The originality of our work lies in the proposal of a reference model, called the Lausanne model, based on what we consider to be the best practices of three communities: (i) the Web, in terms of means of openness, (ii) adaptive hypermedia, in terms of adaptation strategy, and (iii) distance learning, in terms of manipulation of learning resources.
In our model, personalized paths are generated on the basis of:
(i) indexed learning resources whose granularity promotes sharing and reuse, with trusted sources guaranteeing their usefulness and quality;
(ii) user characteristics, compatible with existing standards, allowing the learner to move from one environment to another;
(iii) adaptation that is both individual and social.
To this end, the Lausanne model proposes:
(i) to use ISO/MLR (Metadata for Learning Resources) as the description formalism;
(ii) to describe the user model with XUM (eXtended User Model), our proposal for a model compatible with the IEEE/PAPI and IMS/LIP standards;
(iii) to adapt the ant colony algorithm to the distance-learning context in order to generate personalized paths. The individual dimension is also taken into account by mapping MLR onto XUM.
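The idea of adapting an ant colony algorithm to generate personalized learning paths can be sketched as follows. This is a minimal, hypothetical model of our own (the graph, ratings, and parameters are illustrative, not the thesis's actual algorithm): learning resources form a directed graph, ants build candidate paths by following pheromone-weighted edges, and paths that receive good learner feedback are reinforced, biasing future path generation.

```python
import random

# Hypothetical resource graph: learning resources from "start" to "goal".
GRAPH = {
    "start": ["intro_video", "intro_text"],
    "intro_video": ["exercise"],
    "intro_text": ["exercise"],
    "exercise": ["goal"],
}

# One pheromone value per edge, initially uniform.
pheromone = {(a, b): 1.0 for a, succs in GRAPH.items() for b in succs}

def build_path(rng):
    """One ant walks from start to goal, choosing edges by pheromone weight."""
    node, path = "start", ["start"]
    while node != "goal":
        succs = GRAPH[node]
        weights = [pheromone[(node, s)] for s in succs]
        node = rng.choices(succs, weights=weights)[0]
        path.append(node)
    return path

def reinforce(path, rating, evaporation=0.1):
    """Evaporate all pheromone, then deposit proportionally to the rating."""
    for edge in pheromone:
        pheromone[edge] *= 1 - evaporation
    for a, b in zip(path, path[1:]):
        pheromone[(a, b)] += rating

rng = random.Random(42)
for _ in range(50):
    path = build_path(rng)
    # Simulated feedback: this learner profile rates the video variant higher.
    reinforce(path, rating=1.0 if "intro_video" in path else 0.2)

best = build_path(rng)
```

Over the iterations, the pheromone distribution drifts toward the variant the learner profile rates well, so the generated path converges toward the one an expert would have recommended for that profile.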
To validate our model, we developed an application and tested several scenarios involving different users at different times. We then compared what the system returns with what the expert suggests. The results proved satisfactory: in each case the system returned a path similar to the one the expert would have proposed, which supports our approach.
Classical cryptography is based on mathematical functions. The robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. However, no mathematical proof establishes whether inverting a given one-way function is truly infeasible. It is therefore mandatory to use cryptosystems whose security is scientifically proven (especially in banking, government, and similar contexts). The security of quantum cryptography, on the other hand, can be formally demonstrated: it rests on the laws of physics, which guarantee unconditional security. How, then, can quantum cryptography be used and integrated into existing solutions?
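The asymmetry a one-way function provides can be illustrated with a deliberately tiny toy example (not a secure construction): modular exponentiation is cheap to compute, while inverting it, the discrete logarithm problem, has no known efficient general algorithm and is only feasible here by brute force because the modulus is small.

```python
# Toy one-way function: f(x) = g^x mod p.
# Computing f is cheap; inverting it (the discrete logarithm) is believed
# hard for large p, which is the kind of assumption classical cryptosystems
# rely on -- without a proof that no efficient inversion exists.
P, G = 467, 2  # tiny prime and generator, for illustration only

def f(x):
    return pow(G, x, P)

def brute_force_invert(y):
    # Exhaustive search: feasible only because P is tiny.
    for x in range(P):
        if pow(G, x, P) == y:
            return x
    return None

secret = 153
public = f(secret)                    # easy direction
recovered = brute_force_invert(public)  # hard direction, done by brute force
```

For a realistic modulus of hundreds of digits, the brute-force loop becomes astronomically expensive, yet nobody has proven that no shortcut exists; this is exactly the gap between computational and unconditional security that the question above points at.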
This thesis proposes a method for integrating quantum cryptography into existing communication protocols such as PPP, IPSec, and 802.11i. It sketches several possible scenarios in order to demonstrate feasibility and to estimate their cost. Directives and checkpoints are given to help certify quantum cryptography solutions according to the Common Criteria.
The globalization of markets, changes in the economic context, and the impact of new information technologies have forced companies to rethink the way they manage their intellectual capital (knowledge management) and human capital (competence management). It is now widely accepted that these assets play a particularly strategic role in the organization. A company wishing to embark on a policy of managing them will face various problems. Indeed, in order to manage this knowledge and these competences, a long capitalization process must be carried out, going through different stages such as the identification, extraction, and representation of knowledge and competences.
Various knowledge and competence management methods exist for this purpose, such as MASK, CommonKADS, and KOD. Unfortunately, these methods are very cumbersome to implement, are confined to certain types of knowledge, and are consequently limited in the functionality they can offer. Finally, competence management and knowledge management are two separate fields, whereas it would be interesting to unify the two approaches into one. Indeed, competences are very close to knowledge, as the following definition of competence underlines: "a set of knowledge in action in a given context."
Consequently, we chose to base our proposal on the concept of competence. Competence is among the most crucial of a company's knowledge assets, in particular for avoiding the loss of know-how and for anticipating the company's future needs, since behind employees' competences lies the effectiveness of the organization. Moreover, competence makes it possible to describe many other organizational concepts, such as professions, missions, projects, and training programs. Unfortunately, there is no real consensus on the definition of competence. Indeed, the existing definitions, even if fully satisfactory for experts, do not make it possible to build an operational system.
In our approach, we address competence management using a knowledge management method. By their very nature, knowledge and competence are intimately linked, so such a method is perfectly suited to competence management.
In order to exploit this knowledge and these competences, we first had to define the organizational concepts in a clear and computational way. On this basis, we propose a methodology for building the company's various repositories (competence repository, mission repository, profession repository, and so on). To model these repositories we chose ontologies, because they provide coherent and consensual definitions of concepts while supporting linguistic diversity.
We then map the company's knowledge (training programs, missions, professions, and so on) onto these ontologies in order to exploit and disseminate it.
Our approach to knowledge and competence management has led to a tool offering numerous functionalities, such as the management of mobility areas, strategic analysis, directories, and CV management.
In this thesis we present the design of a systematic, integrated, computer-based approach for detecting potential disruptions from an industry perspective. Following the design science paradigm, we iteratively develop several multi-actor, multi-criteria artifacts dedicated to environment scanning. The contributions of this thesis are both theoretical and practical. We demonstrate the successful use of multi-criteria decision-making methods for technology foresight. Furthermore, we illustrate the design of our artifacts using build-and-evaluate loops supported by a field study of the Swiss mobile payment industry. To increase the relevance of this study, we systematically interviewed key Swiss experts for each design iteration. As a result, our research provides a realistic picture of the current situation in the Swiss mobile payment market and reveals previously undiscovered weak signals of future trends. Finally, we suggest a generic design process for environment scanning.
Western governments have spent considerable sums to facilitate the integration of information and communication technologies into education, hoping to find an economical solution to the thorny equation summed up by the famous formula "do more and better with less." However, despite these efforts and the marked improvement in the quality of service of the infrastructures, this objective is far from being achieved. While we believe it is illusory to expect technology alone to solve the problems of teaching quality, we nevertheless believe it can help improve learning conditions and contribute to the pedagogical reflection that every teacher should conduct before teaching. With this in mind, and convinced that distance education offers significant advantages provided one thinks about teaching "differently," we became interested in the problem of developing this type of application, which lies at the boundary between didactics, cognitive science, and computer science. Thus, in order to propose a realistic and simple solution that facilitates the development, updating, integration, and sustainability of distance-learning applications, we got involved in concrete projects. Through this field experience we observed that (i) the quality of flexible and distance learning modules remains very disappointing, partly because the added value that technology can bring is, in our opinion, not sufficiently exploited, and that (ii) to succeed, any project must not only provide a useful answer to a real need, but also be managed effectively with the support of a "champion."
With the aim of proposing a project management approach adapted to the needs of flexible and distance learning (FDL), we first examined the characteristics of this type of project. We then analyzed existing project methodologies in the hope of using one of them, or a suitable combination of those closest to our needs. Next, empirically and through successive iterations, we defined a pragmatic project management approach and contributed to the development of decision-support sheets facilitating its implementation. We describe some of its actors, with particular emphasis on the pedagogical engineer, whom we consider one of the key success factors of our approach and whose role is to orchestrate it. Finally, we validated our approach a posteriori by reviewing the course of four FDL projects in which we participated and which are representative of the projects found in universities. In conclusion, we believe that implementing our approach, together with computerized decision-support sheets, is an important asset and should in particular make it easier to measure the real impact of technology (i) on the evolution of teaching practice, (ii) on the organization, and (iii) on the quality of teaching. Our approach can also serve as a springboard for establishing a quality process specific to FDL. Further research on the real flexibilization of learning and on the benefits of technology for learners can then be conducted on the basis of metrics that remain to be defined.
The solvability of the fair exchange problem in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes must exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of the others.
After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving this problem over a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other hand, we recall a well-known solution relying on a trusted third party. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result, and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes.
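The trusted-third-party approach to fair exchange can be illustrated with a minimal sketch (the class, names, and all-or-nothing settlement rule below are our own simplification, not the thesis's protocol): each process deposits its item with the trusted party, which releases the items only once every participant has deposited; otherwise it aborts and no one learns anything about the others' inputs.

```python
class TrustedThirdParty:
    """Minimal illustration of TTP-based fair exchange: all-or-nothing release."""
    def __init__(self, participants):
        self.participants = set(participants)
        self.deposits = {}

    def deposit(self, sender, item, recipient):
        # Items are held in escrow by the TTP; nothing is forwarded yet.
        self.deposits[sender] = (item, recipient)

    def settle(self):
        # Fairness: release items only if every participant deposited;
        # otherwise abort so that no process obtains any information.
        if set(self.deposits) != self.participants:
            return None  # abort
        out = {}
        for sender, (item, recipient) in self.deposits.items():
            out.setdefault(recipient, []).append(item)
        return out

ttp = TrustedThirdParty({"alice", "bob"})
ttp.deposit("alice", "item_A", recipient="bob")
aborted = ttp.settle()   # Bob has not deposited yet: settlement aborts.
ttp.deposit("bob", "item_B", recipient="alice")
done = ttp.settle()      # Both deposited: each obtains the expected item.
```

The sketch makes the specification above concrete: the only two outcomes are a complete exchange or an abort in which neither party receives the other's item, which is exactly the safety property the impossibility result shows cannot be achieved without some a priori trusted process.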
The focus then turns to a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load placed on trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamper-proof modules that have limited communication abilities and are required only in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed to illustrate and help apprehend the complexity of fair exchange. This application, which also implements a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display.
Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of a trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, we propose a comparison that clarifies their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
The Wagner hierarchy is, to date, the most refined topological classification of ω-rational languages. Moreover, the algebraic study of formal languages shows that these ω-rational sets correspond precisely to the languages recognizable by finite pointed ω-semigroups. Within this framework, we provide a construction of the algebraic counterpart of the Wagner hierarchy. We adopt a hierarchical game approach, translating Wadge theory from the ω-rational language context to the ω-semigroup context. More precisely, we first show that the Wagner degree is indeed a syntactic invariant. We then define a reduction relation on finite pointed ω-semigroups by means of a Wadge-like infinite two-player game. The collection of these algebraic structures, ordered by this reduction, is then proven to be isomorphic to the Wagner hierarchy, namely a well-founded and decidable partial ordering of width 2 and height $\omega^\omega$. We also describe a decidability procedure for this hierarchy: we introduce a graph representation of finite pointed ω-semigroups that allows their precise Wagner degrees to be computed. The Wagner degree of every ω-rational language can therefore be computed directly on its syntactic image. We then show how to build a finite pointed ω-semigroup of any given Wagner degree. We finally describe the algebraic invariants characterizing every Wagner degree of this hierarchy.
Thesis in joint supervision with the Université Paris-Diderot (Paris 7).
Chapter 1 presents the motivations of this dissertation by illustrating two gaps in the current body of knowledge that are worth filling, describes the research problem addressed by this thesis, and presents the research methodology used to achieve this goal.
Chapter 2 reviews the existing literature, showing that environment analysis is a vital strategic task, that it should be supported by adapted information systems, and that there is thus a need for a conceptual model of the environment providing both a reference framework for better integrating the various existing methods and a more formal definition of the various aspects, in order to support the development of suitable tools.
Chapter 3 proposes a conceptual model that specifies the various environmental aspects relevant to strategic decision making, describes how they relate to each other, and defines them in a more formal way better suited to information systems development.
Chapter 4 evaluates the proposed model by applying it to a concrete environment, in order to assess its suitability for describing the current conditions and potential evolution of a real environment and to get an idea of its usefulness.
Chapter 5 goes a step further by assembling a toolbox of methods that can be used to analyze the various environmental aspects put forward by the model, and by providing more detailed specifications for a number of them to show how our model can facilitate their implementation as software tools.
Chapter 6 describes a prototype of a strategic decision support tool that enables the analysis of some aspects of the environment not well supported by existing tools, in particular the relationships between multiple actors and issues. The usefulness of this prototype is evaluated by applying it to a concrete environment.
Chapter 7 finally concludes this thesis by summarizing its various contributions and by proposing interesting directions for further research.