School of Computing
https://dl.comp.nus.edu.sg/handle/1900.100/1
SoC
2018-01-14T14:46:09Z
Randomized View Reconciliation in Permissionless Distributed Systems
https://dl.comp.nus.edu.sg/handle/1900.100/6863
Randomized View Reconciliation in Permissionless Distributed Systems
HOU, Ruomu; JAHJA, Irvan; LUU, Loi; SAXENA, Prateek; YU, Haifeng
In a sybil attack, an adversary creates a large number of fake identities/nodes and has them join the system. Computational puzzles have long been investigated as a possible sybil defense: if a node fails to solve the puzzle in a timely fashion, it will no longer be accepted by other nodes. However, it is still possible for a malicious node to behave in such a way that it is accepted by some honest nodes but not by other honest nodes. This results in different honest nodes having different views on which set of nodes should form the system. Such view divergence, unfortunately, breaks the overarching assumption required by many existing security protocols.
Partly spurred by the growing popularity of Bitcoin, researchers have recently formalized the above view divergence problem and proposed interesting solutions (which we call view reconciliation protocols). For example, in CRYPTO 2015, Andrychowicz and Dziembowski proposed a view reconciliation protocol with O(N) time complexity, with N being the number of honest nodes in the system. All existing view reconciliation protocols have a similar O(N) time complexity. As this paper's main contribution, we propose a novel view reconciliation protocol with a time complexity of only O(ln N / ln ln N). To achieve such an exponential improvement, we aggressively exploit randomization.
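The computational-puzzle defense mentioned in the abstract can be illustrated with a minimal hashcash-style sketch. This is a generic illustration only, not the protocol from the paper: the function names and the SHA-256 leading-zeros scheme are assumptions.

```python
import hashlib
import itertools

def solve_puzzle(node_id: str, challenge: str, difficulty: int) -> int:
    """Find a nonce so that SHA-256(node_id || challenge || nonce)
    starts with `difficulty` zero hex digits (expected work grows
    exponentially in `difficulty`)."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{node_id}{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_puzzle(node_id: str, challenge: str, difficulty: int, nonce: int) -> bool:
    """Cheap check other nodes run before accepting an identity."""
    digest = hashlib.sha256(f"{node_id}{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Solving is expensive while verification is a single hash, which is what makes creating many sybil identities costly for the adversary.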
Technical Report
Some Lower Bounds in Dynamic Networks with Oblivious Adversaries*
https://dl.comp.nus.edu.sg/handle/1900.100/6751
Some Lower Bounds in Dynamic Networks with Oblivious Adversaries*
JAHJA, Irvan; YU, Haifeng; ZHAO, Yuda
This paper considers several closely-related problems in synchronous dynamic networks with oblivious adversaries, and proves novel Ω(d + poly(m)) lower bounds on their time complexity (in terms of rounds). Here d is the dynamic diameter of the dynamic network and m is the total number of nodes. Before this work, the only known lower bounds on these problems under oblivious adversaries were the trivial Ω(d) lower bounds. Our novel lower bounds are hence the first non-trivial lower bounds and also the first lower bounds with a poly(m) term. Our proof relies on a novel reduction from a certain two-party communication complexity problem. Our central proof technique is unique in the sense that we consider the communication complexity with a special leaker. The leaker helps Alice and Bob in the two-party problem by disclosing to them certain "non-critical" information about the problem instance that they are solving.
How to Find the Best Rated Items on a Likert Scale and How Many Ratings Are Enough
https://dl.comp.nus.edu.sg/handle/1900.100/6430
How to Find the Best Rated Items on a Likert Scale and How Many Ratings Are Enough
LIU, Qing; BASU, Debabrota; GOEL, Shruti; ABDESSALEM, Talel; BRESSAN, Stéphane
One of the modern pillars of collaborative filtering and recommender systems is the collection and exploitation of ratings from users. The Likert scale is a psychometric quantifier of ratings popular among electronic commerce sites. In this paper, we consider the tasks of collecting Likert scale ratings of items and of finding the n-k best-rated items, i.e., the n items that are most likely to be the top-k in a ranking constructed from these ratings.
We devise an algorithm, Pundit, that computes the n-k best-rated items. Pundit uses the probability-generating function constructed from the Likert scale responses to avoid the combinatorial exploration of the possible outcomes and to compute the result efficiently. We empirically evaluate Pundit and competing approaches on real data sets and discuss their effectiveness and efficiency. Our method is effective and competitively efficient.
Selection of the best-rated items meets, in practice, the major obstacle of the scarcity of ratings. We propose an approach that learns from the available data how many ratings are enough to meet a prescribed error and recommends how many additional ratings should be proactively sought. We also empirically evaluate with real data sets the effectiveness of our method to recommend the collection of additional ratings. The results show that the approach is practical and effective.
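The probability-generating-function idea behind Pundit can be illustrated with a minimal sketch: the distribution of an item's total score over n independent Likert responses is the n-th power of the single-response PGF, computed here by repeated convolution rather than enumerating all L^n response combinations. This is an assumption-laden illustration, not the actual Pundit implementation.

```python
import numpy as np

def score_distribution(level_probs, n_ratings):
    """Distribution of the total score of `n_ratings` i.i.d. Likert responses.

    level_probs[i] is the probability of response level i (0-indexed).
    The PGF of one response is the polynomial with these coefficients;
    the PGF of the sum is its n-th power, obtained by convolution in
    O(n * L^2) time instead of enumerating L**n outcomes.
    """
    dist = np.array([1.0])                      # PGF of an empty sum
    one = np.asarray(level_probs, dtype=float)
    for _ in range(n_ratings):
        dist = np.convolve(dist, one)
    return dist                                 # dist[s] = P(total score == s)
```

For a 5-point scale and 3 ratings the result has 13 entries (total scores 0 through 12), from which top-k probabilities over items could then be derived.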
2017-06-06T02:11:13Z
Learn-as-you-go with Megh: Efficient Live Migration of Virtual Machines
https://dl.comp.nus.edu.sg/handle/1900.100/6289
Learn-as-you-go with Megh: Efficient Live Migration of Virtual Machines
BASU, Debabrota; WANG, Xiayang; HONG, Yang; CHEN, Haibo; BRESSAN, Stéphane
Cloud providers leverage live migration of virtual machines to reduce energy consumption and allocate resources efficiently in data centers. Each migration decision depends on three questions: when to move a virtual machine, which virtual machine to move, and where to move it. Dynamic, uncertain and heterogeneous workloads running on virtual machines make such decisions difficult. Knowledge-based and heuristics-based algorithms are commonly used to tackle this problem. Knowledge-based algorithms, such as MaxWeight scheduling algorithms, depend on the specifics and the dynamics of the targeted Cloud architectures and applications. Heuristics-based algorithms, such as MMT algorithms, suffer from high variance and poor convergence because of their greedy approach. We propose a reinforcement learning approach. This approach does not require prior knowledge: it learns the dynamics of the workload as it goes. We formulate the problem of energy- and performance-efficient resource management during live migration as a Markov decision process. While several learning algorithms have been proposed to solve this problem, they remain confined to the academic realm because they face the curse of dimensionality. They are either not scalable in real time, as is the case with MadVM, or need elaborate offline training, as is the case with Q-learning. We propose an actor-critic algorithm, Megh, to overcome these deficiencies. Megh uses a novel dimensionality reduction scheme to project the combinatorially explosive state-action space onto a polynomial-dimensional space with a sparse basis. Megh has the capacity to learn uncertain dynamics and the ability to work in real time. Megh is both scalable and robust. We implement Megh using the CloudSim toolkit and empirically evaluate its performance with the PlanetLab and the Google Cluster workloads. Experiments validate that Megh is more cost-effective, incurs smaller execution overhead and is more scalable than MadVM and MMT.
We explicate our choice of parameters through a sensitivity analysis.
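The idea of learning over a small projected feature space, rather than the combinatorially explosive state-action space, can be sketched with a generic linear temporal-difference update. This is a hypothetical illustration: the feature map, the SARSA-style update, and all parameters below are assumptions, not Megh's actual dimensionality reduction or actor-critic scheme.

```python
import numpy as np

def features(state, action):
    """Hypothetical low-dimensional projection of a (state, action) pair.
    state: per-host CPU utilisations; action: (vm_demand, target_host)."""
    vm_demand, target = action
    return np.array([
        1.0,                           # bias term
        state[target],                 # load on the destination host
        state[target] + vm_demand,     # destination load after the migration
        float(np.max(state)),          # current hot-spot level in the cluster
    ])

def td_update(w, state, action, cost, next_state, next_action,
              alpha=0.1, gamma=0.9):
    """One SARSA-style update of the linear cost estimate w @ features(...).
    A learned w lets the scheduler score candidate migrations in real time."""
    phi = features(state, action)
    phi_next = features(next_state, next_action)
    td_error = cost + gamma * (w @ phi_next) - (w @ phi)
    return w + alpha * td_error * phi
```

The point of the sketch is that each update touches only a fixed-size weight vector, so the cost of a decision does not grow with the number of raw state-action combinations.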
2017-04-04T00:00:00Z