# Computer Science

## New submissions

[ total of 141 entries: 1-141 ]

### New submissions for Fri, 19 Jan 18

[1]
Title: Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events
Comments: Conditionally accepted for inclusion in the 2018 ACM Conference on Human Factors in Computing Systems (CHI'18) Papers program
Subjects: Social and Information Networks (cs.SI); Computers and Society (cs.CY)

This paper investigates bias in coverage between Western and Arab media on Twitter after the November 2015 Beirut and Paris terror attacks. Using two Twitter datasets covering each attack, we investigate how Western and Arab media differed in coverage bias, sympathy bias, and resulting information propagation. We crowdsourced sympathy and sentiment labels for 2,390 tweets across four languages (English, Arabic, French, German), built a regression model to characterize sympathy, and thereafter trained a deep convolutional neural network to predict sympathy. Key findings show that: (a) both events were disproportionately covered; (b) Western media exhibited less sympathy overall, and each region's media was more sympathetic towards the attack in its own region; (c) sympathy predictions supported the ground-truth analysis that Western media was less sympathetic than Arab media; and (d) sympathetic tweets did not spread any further than other tweets. We discuss our results in light of global news flow, Twitter affordances, and public perception impact.

[2]
Title: Device Caching for Network Offloading: Delay Minimization with Presence of User Mobility
Comments: To appear in IEEE Wireless Communication Letters
Subjects: Networking and Internet Architecture (cs.NI)

A delay-optimal caching problem (DOCP) in device-to-device (D2D) networks with mobility is modelled. The problem arises in the context of achieving offloading using device caching, where the offloading effect is represented by the expected network load ratio (NLR), i.e., the percentage of data that has to be downloaded from the network. Compared with related studies, this work considers minimizing delay with an NLR guarantee in mobility scenarios. A lower bound on the global optimum is derived, thus enabling performance benchmarking of any sub-optimal algorithm. For problem-solving, an effective search algorithm (ESA) is proposed based on the bound. Simulations are conducted to evaluate the effectiveness of the ESA.

[3]
Title: An Experimental Study of Cryptocurrency Market Dynamics
Comments: To appear in CHI 2018
Journal-ref: Peter Krafft, Nicol\'as Della Penna, Alex Pentland. (2018). An Experimental Study of Cryptocurrency Market Dynamics. ACM CHI Conference on Human Factors in Computing Systems (CHI)
Subjects: Computers and Society (cs.CY); Human-Computer Interaction (cs.HC); Social and Information Networks (cs.SI); Physics and Society (physics.soc-ph)

As cryptocurrencies gain popularity and credibility, marketplaces for cryptocurrencies are growing in importance. Understanding the dynamics of these markets can help to assess how viable the cryptocurrency ecosystem is and how design choices affect market behavior. One existential threat to cryptocurrencies is dramatic fluctuations in traders' willingness to buy or sell. Using a novel experimental methodology, we conducted an online experiment to study how susceptible traders in these markets are to peer influence from trading behavior. We created bots that executed over one hundred thousand trades costing less than a penny each in 217 cryptocurrencies over the course of six months. We find that individual "buy" actions led to short-term increases in subsequent buy-side activity hundreds of times the size of our interventions. From a design perspective, we note that the design choices of the exchange we study may have promoted this and other peer influence effects, which highlights the potential social and economic impact of HCI in the design of digital institutions.

[4]
Title: Efficient Computation of the 8-point DCT via Summation by Parts
Comments: 13 pages, 4 figures, 2 tables
Journal-ref: J Sign Process Syst (2017)
Subjects: Data Structures and Algorithms (cs.DS); Numerical Analysis (cs.NA); Numerical Analysis (math.NA); Computation (stat.CO); Methodology (stat.ME)

This paper introduces a new fast algorithm for the 8-point discrete cosine transform (DCT) based on the summation-by-parts formula. The proposed method converts the DCT matrix into an alternative transformation matrix that can be decomposed into sparse matrices of low multiplicative complexity. The method is capable of scaled and exact DCT computation, and its associated fast algorithm achieves the theoretical minimal multiplicative complexity for the 8-point DCT. Depending on the nature of the input signal, simplifications can be introduced and the overall complexity of the proposed algorithm can be further reduced. Several types of input signal are analyzed: arbitrary, null-mean, accumulated, and null-mean/accumulated signals. The proposed tool has potential application in harmonic detection, image enhancement, and feature extraction, where the input signal DC level is discarded and/or the signal is required to be integrated.
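As a point of reference for what such factorizations compute, the sketch below (plain NumPy, not the paper's summation-by-parts decomposition) builds the orthonormal 8-point DCT-II matrix directly and checks its orthogonality and exact invertibility:

```python
import numpy as np

N = 8  # 8-point DCT

# Orthonormal DCT-II matrix: C[k, n] = s_k * sqrt(2/N) * cos(pi*(2n+1)*k / (2N)),
# with s_0 = 1/sqrt(2) and s_k = 1 otherwise
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] *= 1 / np.sqrt(2)   # scale the DC row
C *= np.sqrt(2 / N)         # global normalization

# Orthogonality: C @ C.T is the identity
assert np.allclose(C @ C.T, np.eye(N))

# Transform an arbitrary signal and invert it exactly
x = np.arange(N, dtype=float)
X = C @ x
assert np.allclose(C.T @ X, x)
```

A fast algorithm of the kind the paper proposes replaces the dense matrix-vector product `C @ x` with a product of sparse factors requiring far fewer multiplications.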

[5]
Title: Performance Analysis of Joint Pairing and Mode Selection in D2D Communications with FD Radios
Comments: 6 pages, 4 figures, accepted in WCNC 2018
Subjects: Information Theory (cs.IT)

In cellular-D2D networks, users can select their communication mode: either direct, forming D2D links, or indirect, communicating through the BS. In the former case, users must also perform pair selection and choose their partners. The main focus of this paper is an analytical framework, built with tools from stochastic geometry, that addresses these two issues: i) mode selection, in which a user device is established in either cellular or D2D mode based on the received power from the BS weighted by a bias factor, and ii) the choice of the nth-nearest neighbor as the serving node for the receiver of interest, considering both full-duplex (FD) and half-duplex (HD) radios in the D2D links. The analytical and simulation results demonstrate that although the bias factor determines the throughput of each mode, it has no influence on the system sum throughput. Furthermore, we demonstrate that despite suffering from self-interference, FD-D2D yields higher system sum throughput as well as higher coverage probability than its purely HD-D2D counterpart.

[6]
Title: Random Construction of Partial MDS Codes
Subjects: Information Theory (cs.IT)

This work deals with partial MDS (PMDS) codes, a special class of locally repairable codes used in distributed storage systems. We first show that a known construction of these codes, using Gabidulin codes, can be extended to use any maximum rank distance code. Then we define a standard form for the generator matrices of PMDS codes and use this form to give an algebraic description of PMDS generator matrices. This implies that, over a sufficiently large finite field, a randomly chosen generator matrix in PMDS standard form generates a PMDS code with high probability. This also provides sufficient conditions on the field size for the existence of PMDS codes.

[7]
Title: On the Limited Communication Analysis and Design for Decentralized Estimation
Subjects: Systems and Control (cs.SY); Multiagent Systems (cs.MA)

This paper pertains to the analysis and design of decentralized estimation schemes that make use of limited communication. Briefly, these schemes equip the sensors with scalar states that iteratively merge the measurements and the states of other sensors, to be used for state estimation. In contrast to commonly used distributed estimation schemes, the only information exchanged consists of scalars, there is only one common time-scale for communication and estimation, and the retrieval of the state of the system and sensors is achieved in finite time. We extend previous work to a more general setup and provide necessary and sufficient conditions on the communication between the sensors that enable the use of limited-communication decentralized estimation schemes. Additionally, we discuss the cases where the sensors are memoryless, and where the sensors might not have the capacity to discern the contributions of other sensors. Based on these conditions and the fact that communication channels incur a cost, we cast the problem of finding the minimum-cost communication graph that enables limited-communication decentralized estimation schemes as an integer programming problem.

[8]
Title: Network Representation Learning: A Survey
Subjects: Social and Information Networks (cs.SI); Learning (cs.LG); Machine Learning (stat.ML)

With the widespread use of information technologies, information networks have increasingly become popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of society, information diffusion, and different patterns of communication. However, the large scale of information networks often makes network analytic tasks computationally expensive and intractable. Recently, network representation learning has been proposed as a new learning paradigm that embeds network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a thorough review of the current literature on network representation learning in the field of data mining and machine learning. We propose a new categorization to analyze and summarize state-of-the-art network representation learning techniques according to the methodology they employ and the network information they preserve. Finally, to facilitate research on this topic, we summarize benchmark datasets and evaluation methodologies, and discuss open issues and future research directions in this field.
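As a minimal illustration of the embedding idea the survey covers, the sketch below (a generic toy construction of ours, not any specific surveyed method) maps the vertices of a small graph to low-dimensional vectors using the leading eigenvectors of its adjacency matrix:

```python
import numpy as np

# Toy graph: two triangles joined by a bridge edge (6 vertices)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n, d = 6, 2  # number of vertices, embedding dimension

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Spectral embedding: coordinates from the eigenvectors of A whose
# eigenvalues are largest in absolute value
w, V = np.linalg.eigh(A)
idx = np.argsort(-np.abs(w))[:d]
emb = V[:, idx]  # one d-dimensional vector per vertex

assert emb.shape == (n, d)
```

Downstream tasks such as vertex classification or clustering then operate on `emb` instead of the original network, which is the "further analysis in the new vector space" the survey refers to.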

[9]
Title: Time Matters: Multi-scale Temporalization of Social Media Popularity
Comments: accepted in ACM Multimedia 2016
Subjects: Social and Information Networks (cs.SI); Artificial Intelligence (cs.AI); Multimedia (cs.MM)

The evolution of social media popularity exhibits rich temporality, i.e., popularities change over time at various levels of temporal granularity, influenced by temporal variations in public attention and user activity. For example, popularity patterns of street snaps on Flickr are observed to depict distinctive fashion styles at specific time scales, such as season-based periodic fluctuations for Trench Coat or one-off peaks lasting days for Evening Dress. However, this fact is often overlooked by existing research on popularity modeling. We present the first study to incorporate multiple time-scale dynamics into predicting online popularity. We propose a novel computational framework, named Multi-scale Temporalization, for estimating popularity based on multi-scale decomposition and structural reconstruction in a tensor space of user, post, and time by joint low-rank constraints. To handle the noise caused by context inconsistency, we design a data rearrangement step based on context aggregation as preprocessing to enhance the contextual relevance of neighboring data in the tensor space. As a result, our approach can leverage multiple levels of temporal characteristics and reduce the noise of data decomposition to improve modeling effectiveness. We evaluate our approach on two large-scale Flickr image datasets with over 1.8 million photos in total on the task of popularity prediction. The results show that our approach significantly outperforms state-of-the-art popularity prediction techniques, with a relative improvement of 10.9%-47.5% in prediction accuracy.

[10]
Title: NDlib: a Python Library to Model and Analyze Diffusion Processes Over Complex Networks
Journal-ref: International Journal of Data Science and Analytics, 2018
Subjects: Social and Information Networks (cs.SI)

The analysis of dynamics of and on networks is nowadays a central topic in Social Network Analysis. To support students, teachers, developers and researchers in this work, we introduce a novel framework, namely NDlib, an environment designed to describe diffusion simulations. NDlib is designed to be a multi-level ecosystem that can be fruitfully used by different user segments. For this reason, on top of NDlib, we designed a simulation server that allows remote execution of experiments as well as an online visualization tool that abstracts its programmatic interface and makes the simulation platform available to non-technicians.
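As a rough, stand-alone illustration of the kind of diffusion process NDlib packages behind a uniform API (the function and parameter names below are ours, not NDlib's), this sketch runs a simple susceptible-infected cascade on a ring graph:

```python
import random

def si_diffusion(adj, seeds, beta, steps, rng):
    """Susceptible-infected cascade: at each step every infected node
    infects each susceptible neighbor independently with probability beta."""
    infected = set(seeds)
    history = [len(infected)]  # number of infected nodes over time
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
        history.append(len(infected))
    return history

# Ring graph of 10 nodes; with beta = 1 the infection front advances
# one hop in each direction per step
adj = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
history = si_diffusion(adj, seeds=[0], beta=1.0, steps=5, rng=random.Random(0))
assert history[0] == 1 and history[-1] == 10  # the ring saturates
```

NDlib's value is that such models (SI, SIR, threshold, opinion dynamics, and many others) come pre-implemented with a common configuration and execution interface, so a user only declares the graph, the model, and its parameters.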

[11]
Title: On Spectral Graph Embedding: A Non-Backtracking Perspective and Graph Approximation
Comments: SDM 2018 (Full version including all proofs)
Subjects: Social and Information Networks (cs.SI); Data Structures and Algorithms (cs.DS)

Graph embedding has been proven to be efficient and effective in facilitating graph analysis. In this paper, we present a novel spectral framework called NOn-Backtracking Embedding (NOBE), which offers a new perspective that organizes graph data at a deep level by tracking the flow traversing on the edges with backtracking prohibited. Further, by analyzing the non-backtracking process, a technique called graph approximation is devised, which provides a channel to transform the spectral decomposition on an edge-to-edge matrix to that on a node-to-node matrix. Theoretical guarantees are provided by bounding the difference between the corresponding eigenvalues of the original graph and its graph approximation. Extensive experiments conducted on various real-world networks demonstrate the efficacy of our methods on both macroscopic and microscopic levels, including clustering and structural hole spanner detection.
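The edge-to-edge operator underlying such methods can be written down directly; the sketch below (a generic construction, not NOBE itself) builds the non-backtracking (Hashimoto) matrix of a toy graph, whose entries track edge-to-edge transitions that never immediately reverse:

```python
import numpy as np

# Undirected toy graph; each undirected edge yields two directed edges
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
m2 = len(directed)  # 2m directed edges

# Non-backtracking matrix: B[(u->v), (x->y)] = 1 iff v == x and y != u,
# i.e., the walk continues from v without backtracking to u
B = np.zeros((m2, m2))
for i, (u, v) in enumerate(directed):
    for j, (x, y) in enumerate(directed):
        if v == x and y != u:
            B[i, j] = 1.0

assert B.shape == (2 * len(edges), 2 * len(edges))
# From the directed edge (0->1), the only continuation is (1->2),
# since (1->0) would backtrack
i = directed.index((0, 1))
assert B[i].sum() == 1 and B[i, directed.index((1, 2))] == 1
```

Spectral decompositions of this 2m x 2m matrix are what the paper's "graph approximation" technique transfers back to a node-to-node matrix, avoiding work in the larger edge space.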

[12]
Title: Active Community Detection: A Maximum Likelihood Approach
Subjects: Social and Information Networks (cs.SI); Learning (cs.LG); Machine Learning (stat.ML)

We propose novel semi-supervised and active learning algorithms for the problem of community detection on networks. The algorithms are based on optimizing the likelihood function of the community assignments given a graph and an estimate of the statistical model that generated it. The optimization framework is inspired by prior work on the unsupervised community detection problem in Stochastic Block Models (SBM) using Semi-Definite Programming (SDP). In this paper we provide the next steps in the evolution of learning communities in this context, namely a constrained semi-definite programming algorithm and a newly presented active learning algorithm. The active learner intelligently queries nodes that are expected to maximize the change in the model likelihood. Experimental results show that this active learning algorithm outperforms the random-selection semi-supervised version of the same algorithm as well as other state-of-the-art active learning algorithms. The significantly improved performance of our algorithms is demonstrated on both real-world and SBM-generated networks, even when the SBM has a signal-to-noise ratio (SNR) below the known unsupervised detectability threshold.

[13]
Title: On the Scalability of the GPUexplore Explicit-State Model Checker
Authors: Nathan Cassee (Eindhoven University of Technology, Eindhoven, The Netherlands), Thomas Neele (Eindhoven University of Technology, Eindhoven, The Netherlands), Anton Wijs (Eindhoven University of Technology, Eindhoven, The Netherlands)
Comments: In Proceedings GaM 2017, arXiv:1712.08345
Journal-ref: EPTCS 263, 2017, pp. 38-52
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Data Structures and Algorithms (cs.DS)

The use of graphics processors (GPUs) is a promising approach to speed up model checking to such an extent that it becomes feasible to instantly verify software systems during development. GPUexplore is an explicit-state model checker that runs all its computations on the GPU. Over the years it has been extended with various techniques, and the possibilities to further improve its performance have been continuously investigated. In this paper, we discuss how the hash table of the tool works, which is at the heart of its functionality. We propose an alteration of the hash table that in isolated experiments seems promising, and analyse its effect when integrated in the tool. Furthermore, we investigate the current scalability of GPUexplore, by experimenting both with input models of varying sizes and running the tool on one of the latest GPUs of NVIDIA.
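To illustrate the role such a hash table plays, deduplicating visited states during state-space exploration, here is a minimal CPU-side sketch using generic open addressing; GPUexplore's actual GPU-oriented design (and the alteration the paper studies) differs:

```python
class StateTable:
    """Fixed-size open-addressing hash table of visited states.

    A minimal CPU-side analogue of a model checker's visited-state set:
    find_or_put returns True iff the state was newly inserted.
    """

    def __init__(self, capacity):
        self.slots = [None] * capacity

    def find_or_put(self, state):
        h = hash(state) % len(self.slots)
        for probe in range(len(self.slots)):
            i = (h + probe) % len(self.slots)  # linear probing
            if self.slots[i] is None:
                self.slots[i] = state          # new state: claim the slot
                return True
            if self.slots[i] == state:         # already visited
                return False
        raise RuntimeError("state table full")

table = StateTable(64)
assert table.find_or_put((0, 0)) is True    # first visit: explore successors
assert table.find_or_put((0, 0)) is False   # duplicate: prune
assert table.find_or_put((0, 1)) is True
```

On a GPU the same operation must be performed by thousands of threads concurrently, which is why the hash table's probing and locking scheme dominates the tool's performance and is worth tuning, as the paper does.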

[14]
Title: Integrating Remote Attestation with Transport Layer Security
Subjects: Cryptography and Security (cs.CR)

Intel(R) Software Guard Extensions (Intel(R) SGX) is a promising technology to securely process information in otherwise untrusted environments. An important aspect of Intel SGX is the ability to perform remote attestation to assess the endpoint's trustworthiness. Ultimately, remote attestation will result in an attested secure channel to provision secrets to the enclave.
We seamlessly combine Intel SGX remote attestation with the establishment of a standard Transport Layer Security (TLS) connection. Remote attestation is performed during the connection setup. To achieve this, we neither change the TLS protocol, nor do we modify existing protocol implementations.
We have prototype implementations for three widely used open-source TLS libraries: OpenSSL, wolfSSL and mbedTLS. We describe the requirements, design and implementation details to seamlessly bind attested TLS endpoints to Intel SGX enclaves.

[15]
Title: The Complexity of Subdivision for Diameter-Distance Tests
Subjects: Symbolic Computation (cs.SC)

We present a general framework for analyzing the complexity of subdivision-based algorithms whose tests are based on the sizes of regions and their distance to certain sets (often varieties) intrinsic to the problem under study. We call such tests diameter-distance tests. We illustrate that diameter-distance tests are common in the literature by proving that many interval arithmetic-based tests are, in fact, diameter-distance tests. For this class of algorithms, we provide both non-adaptive bounds for the complexity, based on separation bounds, as well as adaptive bounds, by applying the framework of continuous amortization.
Using this structure, we provide the first complexity analysis for the algorithm by Plantinga and Vegter for approximating real implicit curves and surfaces. We present both adaptive and non-adaptive a priori worst-case bounds on the complexity of this algorithm, both in terms of the number of subregions constructed and in terms of the bit complexity of the construction. Finally, we construct families of hypersurfaces to prove that our bounds are tight.

[16]
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)

Mobile Edge Computing (MEC) pushes computing functionalities away from the centralized cloud to the network edge, thereby meeting the latency requirements of many emerging mobile applications and saving backhaul network bandwidth. Although many existing works have studied computation offloading policies, service caching is an equally, if not more, important design topic of MEC, yet it receives much less attention. Service caching refers to caching application services and their related databases/libraries in the edge server (e.g., an MEC-enabled base station), thereby enabling the corresponding computation tasks to be executed. Because only a small number of application services can be cached in a resource-limited edge server at the same time, which services to cache has to be judiciously decided to maximize the edge computing performance. In this paper, we investigate the compelling but much less studied problem of dynamic service caching in MEC-enabled dense cellular networks. We propose an efficient online algorithm, called OREO, which jointly optimizes dynamic service caching and task offloading to address a number of key challenges in MEC systems, including service heterogeneity, unknown system dynamics, spatial demand coupling and decentralized coordination. Our algorithm is developed based on Lyapunov optimization and Gibbs sampling, works online without requiring future information, and achieves provably close-to-optimal performance. Simulation results show that our algorithm can effectively reduce computation latency for end users while keeping energy consumption low.

[17]
Title: Quantized Compressive Sensing with RIP Matrices: The Benefit of Dithering
Subjects: Information Theory (cs.IT)

In Compressive Sensing theory and its applications, quantization of signal measurements, as integrated into any realistic sensing model, impacts the quality of signal reconstruction. In fact, there even exist incompatible combinations of quantization functions (e.g., the 1-bit sign function) and sensing matrices (e.g., Bernoulli) that cannot lead to an arbitrarily low reconstruction error when the number of observations increases.
This work shows that, for a scalar and uniform quantization, provided that a uniform random vector, or "random dithering", is added to the compressive measurements of a low-complexity signal (e.g., a sparse or compressible signal, or a low-rank matrix) before quantization, a large class of random matrix constructions known to respect the restricted isometry property (RIP) are made "compatible" with this quantizer. This compatibility is demonstrated by the existence of (at least) one signal reconstruction method, the "projected back projection" (PBP), whose reconstruction error is proved to decay when the number of quantized measurements increases.
Despite the simplicity of PBP, which amounts to projecting the back projection of the compressive observations (obtained from their multiplication by the adjoint sensing matrix) onto the low-complexity set containing the observed signal, we also prove that given a RIP matrix and for a single realization of the dithering, this reconstruction error decay is also achievable uniformly for the sensing of all signals in the considered low-complexity set.
We finally confirm empirically these observations in several sensing contexts involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial Discrete Cosine Transform (DCT) matrices.
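A toy version of this pipeline, our simplification with arbitrarily chosen sizes and amplitudes rather than the paper's setup, can be sketched as follows: a sparse signal is measured with a Gaussian (RIP-type) matrix, quantized by a uniform scalar quantizer after adding a uniform random dither, and reconstructed by projected back projection (back-project, then project onto the k-sparse set by keeping the k largest entries):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, delta = 64, 512, 4, 0.5   # dimension, measurements, sparsity, bin width

# k-sparse signal with arbitrary amplitudes; Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [3.0, -2.0, 2.5, -1.5]
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Uniform scalar quantization (bin midpoints) with random dithering:
# the dither is added before quantization and subtracted afterwards
dither = rng.uniform(0.0, delta, m)
q = delta * (np.floor((A @ x + dither) / delta) + 0.5) - dither

# Projected back projection: back-project via the adjoint, then keep
# the k largest-magnitude entries (projection onto the k-sparse set)
z = A.T @ q
x_hat = np.zeros(n)
support = np.argsort(-np.abs(z))[:k]
x_hat[support] = z[support]

# Reconstruction error is well below the trivial all-zero estimate
assert np.linalg.norm(x_hat - x) < 0.5 * np.linalg.norm(x)
```

The paper's result is that, thanks to the dither, this error provably decays as m grows for any RIP matrix, which would not hold for undithered 1-bit-style quantization with, e.g., Bernoulli matrices.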

[18]
Title: Sparse Activity Detection for Massive Connectivity
Comments: 15 pages, 7 figures; accepted at TSP
Subjects: Information Theory (cs.IT)

This paper considers the massive connectivity application in which a large number of potential devices communicate with a base-station (BS) in a sporadic fashion. The detection of device activity pattern together with the estimation of the channel are central problems in such a scenario. Due to the large number of potential devices in the network, the devices need to be assigned non-orthogonal signature sequences. The main objective of this paper is to show that by using random signature sequences and by exploiting sparsity in the user activity pattern, the joint user detection and channel estimation problem can be formulated as a compressed sensing single measurement vector (SMV) problem or multiple measurement vector (MMV) problem, depending on whether the BS has a single antenna or multiple antennas, and be efficiently solved using an approximate message passing (AMP) algorithm. This paper proposes an AMP algorithm design that exploits the statistics of the wireless channel and provides an analytical characterization of the probabilities of false alarm and missed detection by using the state evolution. We consider two cases depending on whether the large-scale component of the channel fading is known at the BS and design the minimum mean squared error (MMSE) denoiser for AMP according to the channel statistics. Simulation results demonstrate the substantial advantage of exploiting the statistical channel information in AMP design; however, knowing the large-scale fading component does not offer tangible benefits. For the multiple-antenna case, we employ two different AMP algorithms, namely the AMP with vector denoiser and the parallel AMP-MMV, and quantify the benefit of deploying multiple antennas at the BS.
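The flavor of the recovery problem can be seen on a toy single-antenna (SMV) instance; note that the sketch below uses plain iterative soft-thresholding (omitting AMP's Onsager correction term and the paper's MMSE denoisers), with all sizes and gains chosen arbitrarily by us:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, K = 256, 128, 5   # potential devices, signature length, active devices

# Non-orthogonal random signature sequences (columns of A) and a sparse
# activity/effective-channel vector with arbitrary gains for active devices
A = rng.standard_normal((L, N)) / np.sqrt(L)
x = np.zeros(N)
active = rng.choice(N, K, replace=False)
x[active] = [2.0, -1.5, 2.5, 1.8, -2.2]
y = A @ x + 0.01 * rng.standard_normal(L)   # received signal at the BS

def soft(v, t):
    """Soft-thresholding denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Plain iterative soft-thresholding; AMP adds an Onsager correction term
mu, lam = 0.1, 0.5      # step size and sparsity penalty (hand-picked)
x_hat = np.zeros(N)
for _ in range(100):
    x_hat = soft(x_hat + mu * (A.T @ (y - A @ x_hat)), mu * lam)

# The fit improves over the all-inactive hypothesis, and the strongest
# recovered entry corresponds to a truly active device
assert np.linalg.norm(y - A @ x_hat) < np.linalg.norm(y)
assert int(np.argmax(np.abs(x_hat))) in set(int(a) for a in active)
```

The paper's AMP design replaces the generic soft threshold with an MMSE denoiser matched to the wireless channel statistics, which is precisely where its reported gains come from.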

[19]
Title: A Pipeline for Post-Crisis Twitter Data Acquisition
Comments: 6 pages, 4 figures, Workshop on Social Web in Emergency and Disaster Management 2018 at the ACM WSDM Conference
Subjects: Information Retrieval (cs.IR)

Due to instant availability of data on social media platforms like Twitter, and advances in machine learning and data management technology, real-time crisis informatics has emerged as a prolific research area in the last decade. Although several benchmarks are now available, especially on portals like CrisisLex, an important, practical problem that has not been addressed thus far is the rapid acquisition and benchmarking of data from free, publicly available streams like the Twitter API. In this paper, we present ongoing work on a pipeline for facilitating immediate post-crisis data collection, curation and relevance filtering from the Twitter API. The pipeline is minimally supervised, alleviating the need for feature engineering by including a judicious mix of data preprocessing and fast text embeddings, along with an active learning framework. We illustrate the utility of the pipeline by describing a recent case study wherein it was used to collect and analyze millions of tweets in the immediate aftermath of the Las Vegas shootings.

[20]
Title: Nonuniform Reductions and NP-Completeness
Subjects: Computational Complexity (cs.CC)

Nonuniformity is a central concept in computational complexity with powerful connections to circuit complexity and randomness. Nonuniform reductions have been used to study the isomorphism conjecture for NP and completeness for larger complexity classes. We study the power of nonuniform reductions for NP-completeness, obtaining both separations and upper bounds for nonuniform completeness vs uniform completeness in NP.
Under various hypotheses, we obtain the following separations:
1. There is a set complete for NP under nonuniform many-one reductions, but not under uniform many-one reductions. This is true even with a single bit of nonuniform advice.
2. There is a set complete for NP under nonuniform many-one reductions with polynomial-size advice, but not under uniform Turing reductions. That is, polynomial nonuniformity is stronger than a polynomial number of queries.
3. For any fixed polynomial p(n), there is a set complete for NP under uniform 2-truth-table reductions, but not under nonuniform many-one reductions that use p(n) advice. That is, giving a uniform reduction a second query makes it more powerful than a nonuniform reduction with fixed polynomial advice.
4. There is a set complete for NP under nonuniform many-one reductions with polynomial advice, but not under nonuniform many-one reductions with logarithmic advice. This hierarchy theorem also holds for other reducibilities, such as truth-table and Turing.
We also consider uniform upper bounds on nonuniform completeness. Hirahara (2015) showed that unconditionally every set that is complete for NP under nonuniform truth-table reductions that use logarithmic advice is also uniformly Turing-complete. We show that under a derandomization hypothesis, the same statement for truth-table reductions and truth-table completeness also holds.

[21]
Title: Nondeterministic Sublinear Time Has Measure 0 in P
Subjects: Computational Complexity (cs.CC)

The measure hypothesis is a quantitative strengthening of the P != NP conjecture which asserts that NP is a nonnegligible subset of EXP. Cai, Sivakumar, and Strauss (1997) showed that the analogue of this hypothesis in P is false. In particular, they showed that NTIME[n^{1/11}] has measure 0 in P. We improve on their result to show that the class of all languages decidable in nondeterministic sublinear time has measure 0 in P. Our result is based on DNF width and holds for all four major notions of measure on P.

[22]
Title: Perceived Audiovisual Quality Modelling based on Decision Trees, Genetic Programming and Neural Networks
Subjects: Multimedia (cs.MM)

Our objective is to build machine learning based models that predict audiovisual quality directly from a set of correlated parameters that are extracted from a target quality dataset. We have used the bitstream version of the INRS audiovisual quality dataset that reflects contemporary real-time configurations for video frame rate, video quantization, noise reduction parameters and network packet loss rate. We have utilized this dataset to build bitstream perceived quality estimation models based on the Random Forests, Bagging, Deep Learning and Genetic Programming methods.
We have taken an empirical approach and have generated models varying from very simple to the most complex depending on the number of features used from the quality dataset. Random Forests and Bagging models have overall generated the most accurate results in terms of RMSE and Pearson correlation coefficient values. Deep Learning and Genetic Programming based bitstream models have also achieved good results but that high performance was observed only with a limited range of features. We have also obtained the epsilon-insensitive RMSE values for each model and have computed the significance of the difference between the correlation coefficients.
Overall we conclude that computing the bitstream information is worth the effort it takes to generate and helps to build more accurate models for real-time communications. However, it is useful only for the deployment of the right algorithms with the carefully selected subset of the features. The dataset and tools that have been developed during this research are publicly available for research and development purposes.

[23]
Title: Sparsely Connected Convolutional Networks
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Residual learning with skip connections permits training ultra-deep neural networks and obtains superb performance. Building on this direction, DenseNets proposed a dense connection structure where each layer is directly connected to all of its predecessors. The densely connected structure leads to better information flow and feature reuse. However, the overly dense skip connections also bring potential risks of overfitting, parameter redundancy and large memory consumption. In this work, we analyze the feature aggregation patterns of ResNets and DenseNets under a uniform aggregation framework. We show that both structures densely gather features from previous layers in the network but combine them in their respective ways: summation (ResNets) or concatenation (DenseNets). We compare the strengths and drawbacks of these two aggregation methods and analyze their potential effects on the networks' performance. Based on our analysis, we propose a new structure named SparseNets, which achieves better performance with fewer parameters than DenseNets and ResNets.
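The two aggregation styles under comparison are easy to contrast directly; a small NumPy sketch (shapes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Feature maps produced by three earlier layers (height, width, channels)
f1, f2, f3 = (rng.standard_normal((8, 8, 4)) for _ in range(3))

# ResNet-style aggregation: element-wise summation; channel count is fixed,
# so parameter cost downstream stays constant but features are entangled
res_agg = f1 + f2 + f3
assert res_agg.shape == (8, 8, 4)

# DenseNet-style aggregation: concatenation; every input is preserved
# separately, but the channel count (and memory) grows with depth
dense_agg = np.concatenate([f1, f2, f3], axis=-1)
assert dense_agg.shape == (8, 8, 12)
```

The linear growth of `dense_agg`'s channel dimension with the number of aggregated layers is exactly the parameter-redundancy and memory concern the abstract raises for overly dense connectivity.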

[24]
Title: Batch Auction Design For Cloud Container Services
Subjects: Computer Science and Game Theory (cs.GT); Distributed, Parallel, and Cluster Computing (cs.DC)

Cloud containers represent a new, light-weight alternative to virtual machines in cloud computing. A user job may be described by a container graph that specifies the resource profile of each container and container dependence relations. This work is the first in the cloud computing literature that designs efficient market mechanisms for container based cloud jobs. Our design targets simultaneously incentive compatibility, computational efficiency, and economic efficiency. It further adapts the idea of batch online optimization into the paradigm of mechanism design, leveraging agile creation of cloud containers and exploiting delay tolerance of elastic cloud jobs. The new and classic techniques we employ include: (i) compact exponential optimization for expressing and handling non-traditional constraints that arise from container dependence and job deadlines; (ii) the primal-dual schema for designing efficient approximation algorithms for social welfare maximization; and (iii) posted price mechanisms for batch decision making and truthful payment design. Theoretical analysis and trace-driven empirical evaluation verify the efficacy of our container auction algorithms.

[25]
Title: Unsupervised Hashtag Retrieval and Visualization for Crisis Informatics
Comments: 2 pages, 3 figures, Workshop on Social Web in Emergency and Disaster Management at ACM WSDM 2018
Subjects: Information Retrieval (cs.IR)

In social media like Twitter, hashtags carry a lot of semantic information and can be easily distinguished from the main text. Exploring and visualizing the space of hashtags in a meaningful way can offer important insights into a dataset, especially in crisis situations. In this demonstration paper, we present a functioning prototype, HashViz, that ingests a corpus of tweets collected in the aftermath of a crisis (such as the Las Vegas shooting) and uses the fastText bag-of-tricks embedding algorithm from Facebook Research to embed words and hashtags into a vector space. The resulting hashtag vectors are then visualized in 2D using the t-SNE dimensionality reduction algorithm. Although multiple Twitter visualization platforms exist, HashViz is distinguished by being simple, scalable, interactive, and portable enough to be deployed on a server for million-tweet corpora collected after arbitrary disasters, without special-purpose installation, technical expertise, manual supervision, or costly software or infrastructure investment. Although simple, HashViz offers an intuitive way to summarize, and gain insight into, a developing crisis situation. HashViz is also completely unsupervised, requiring no manual input to go from a raw corpus to a visualization and search interface. Using the recent Las Vegas mass shooting as a case study, we illustrate the potential of HashViz using only a web browser on the client side.
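
The first stage of such a pipeline, collecting hashtag frequencies from raw tweets, can be sketched as below; the fastText embedding and t-SNE projection steps are not reproduced here, and the sample tweets are invented for illustration:

```python
import re
from collections import Counter

def extract_hashtags(tweets):
    """Collect (lower-cased) hashtag frequencies from raw tweet
    text. In HashViz these tokens would then be embedded with
    fastText and projected to 2D with t-SNE."""
    pattern = re.compile(r"#\w+")
    counts = Counter()
    for text in tweets:
        counts.update(tag.lower() for tag in pattern.findall(text))
    return counts

tweets = [
    "Stay safe everyone #LasVegas #PrayForVegas",
    "First reports coming in #lasvegas",
]
top = extract_hashtags(tweets).most_common(1)
```

Because hashtags are lexically marked by `#`, this extraction needs no supervision, which is what lets the whole pipeline stay unsupervised end to end.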

[26]
Title: Scheduling and Tiling Reductions on Realistic Machines
Authors: Nirmal Prajapati
Subjects: Programming Languages (cs.PL)

Computations in which the number of results is much smaller than the input data, and in which the results are produced through some form of accumulation, are called reductions. Reductions appear in many scientific applications. Usually, a reduction accumulates values with an associative and commutative binary operator; reductions are therefore highly parallel. Given unbounded fan-in, one could execute a reduction in constant time, provided that the data is available. However, because real machines have bounded fan-in, the accumulation cannot be performed in a single time step and has to be broken into parts. Thus, a (partial) serialization of reductions becomes necessary. This makes scheduling reductions a difficult and interesting problem.
There have been a number of research works in the context of scheduling reductions. We focus on the scheduling techniques presented by Gupta et al., identify a potential issue in their scheduling algorithm, and provide a solution. In addition, we demonstrate how these scheduling techniques can be extended to "tile" reductions, and we briefly survey other studies that address the problem of scheduling reductions.
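
The bounded fan-in serialization described above can be sketched as a simple tree reduction; the function below and its interface are ours, not the paper's:

```python
from functools import reduce as fold

def tree_reduce(values, op, fan_in=2):
    """Serialize a reduction for a machine with bounded fan-in:
    each time step combines groups of at most `fan_in` partial
    results, taking O(log_{fan_in} n) steps rather than the single
    step an unbounded fan-in machine would need. `op` is assumed
    associative and commutative, so regrouping is safe."""
    steps = 0
    while len(values) > 1:
        values = [fold(op, values[i:i + fan_in])
                  for i in range(0, len(values), fan_in)]
        steps += 1
    return values[0], steps

# 8 inputs with fan-in 2: 8 -> 4 -> 2 -> 1, i.e. 3 time steps.
total, steps = tree_reduce(list(range(8)), lambda a, b: a + b)
```

Scheduling amounts to choosing how these groups are formed and placed in time, which is where the subtleties the paper studies arise.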

[27]
Title: How can social planners prevent disappointment in an election?
Comments: 19 pages, 1 figure, and 2 tables
Subjects: Multiagent Systems (cs.MA)

Mechanism design is concerned with settings where a policy maker (or social planner) faces the problem of aggregating the announced preferences of multiple agents into a collective (or social), system-wide decision. One of the most important methods of preference aggregation in multi-agent systems is an election, in which the aim is to select the candidate who reflects the common will of the whole society. Despite the importance of this subject, in real-world situations the result of an election can, under special circumstances, be the complete antithesis of the purpose of those who run it, or the election may lead to the dissatisfaction of a large number of people. To analyze these situations, this paper discusses a notion called social disappointment and proposes new protocols to prevent it. A version of the impossibility theorem regarding social disappointment in elections is stated and proved. Finally, numerical results obtained by simulating the plurality and Hare voting protocols show that the Hare system is somewhat more successful than plurality at preventing social disappointment.
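
As a minimal illustration of the two protocols being simulated, here is our own sketch (not the paper's code) of plurality and a single-winner Hare/instant-runoff count, on a classic profile where the two disagree; ballots are assumed to rank all candidates:

```python
from collections import Counter

def plurality(ballots):
    """Plurality: the candidate with the most first-place votes wins."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def hare(ballots):
    """Hare system (single-winner instant runoff): repeatedly drop
    the candidate with the fewest first-choice votes, transferring
    those ballots, until some candidate holds a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots)
        top, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots) or len(tally) == 1:
            return top
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# 4 voters prefer A, 3 prefer B, 2 prefer C (with B second).
profile = ([["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3
           + [["C", "B", "A"]] * 2)
```

Here plurality elects A, while Hare eliminates C, transfers its two ballots to B, and elects B with a majority, showing how the two protocols can diverge on the same preferences.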

[28]
Title: On the influence of Dice loss function in multi-class organ segmentation of abdominal CT using 3D fully convolutional networks
Comments: presented at MI-ken, November 2017, Takamatsu, Japan (this http URL)
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Deep learning-based methods have achieved impressive results for the segmentation of medical images. With the development of 3D fully convolutional networks (FCNs), it has become feasible to produce improved results for multi-organ segmentation of 3D computed tomography (CT) images. The results of multi-organ segmentation using deep learning-based methods depend not only on the choice of network architecture, but also strongly on the choice of loss function. In this paper, we present a discussion of the influence of Dice-based loss functions for multi-class organ segmentation using a dataset of abdominal CT volumes. We investigated three ways of weighting the Dice loss function based on class label frequencies (uniform, simple and square) and evaluated their influence on segmentation accuracy. Furthermore, we compared the influence of different initial learning rates. We achieved average Dice scores of 81.3%, 59.5% and 31.7% for the uniform, simple and square weightings when the learning rate is 0.001, and 78.2%, 81.0% and 58.5% for the same weightings when the learning rate is 0.01. Our experiments indicate a strong relationship between the class-balancing weights and the initial learning rate during training.
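
A toy sketch of the three weighting schemes as we read them from the abstract: hard labels stand in for the softmax outputs a 3D FCN would actually produce, and the function names are ours:

```python
def dice_per_class(pred, target, num_classes):
    """Per-class Dice coefficient from flat label lists (toy
    version; FCN training would use soft probabilities)."""
    dices = []
    for c in range(num_classes):
        p = [1.0 if x == c else 0.0 for x in pred]
        t = [1.0 if x == c else 0.0 for x in target]
        inter = sum(pi * ti for pi, ti in zip(p, t))
        denom = sum(p) + sum(t)
        dices.append(2.0 * inter / denom if denom else 1.0)
    return dices

def weighted_dice_loss(pred, target, num_classes, scheme="uniform"):
    """The three class-frequency weightings compared in the paper:
    'uniform' (w_c = 1), 'simple' (w_c = 1/f_c) and 'square'
    (w_c = 1/f_c^2), where f_c is the label frequency of class c."""
    n = len(target)
    freqs = [max(target.count(c), 1) / n for c in range(num_classes)]
    if scheme == "uniform":
        w = [1.0] * num_classes
    elif scheme == "simple":
        w = [1.0 / f for f in freqs]
    else:  # "square"
        w = [1.0 / (f * f) for f in freqs]
    d = dice_per_class(pred, target, num_classes)
    return 1.0 - sum(wi * di for wi, di in zip(w, d)) / sum(w)
```

The 'simple' and 'square' schemes upweight rare organs, which is why their interaction with the learning rate matters so much in the reported results.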

[29]
Title: Security in Mobile Edge Caching with Reinforcement Learning
Subjects: Cryptography and Security (cs.CR)

Mobile edge computing usually uses caching to support multimedia content in the 5G mobile Internet and to reduce computing overhead and latency. Mobile edge caching (MEC) systems are vulnerable to various attacks, such as denial-of-service attacks and rogue edge attacks. This article investigates the attack models in MEC systems, focusing on both the mobile offloading and the caching procedures. We propose security solutions that apply reinforcement learning (RL) techniques to provide secure offloading to the edge nodes against jamming attacks. We also present light-weight authentication and secure collaborative caching schemes to protect data privacy. We evaluate the performance of the RL-based security solution for mobile edge caching and discuss the challenges that need to be addressed in the future.

[30]
Title: Citation Analysis of Innovative ICT and Advances of Governance (2008-2017)
Authors: Shuhua (Monica)Liu, Liting Pan, Xiaowei Chen
Subjects: Social and Information Networks (cs.SI); Computers and Society (cs.CY); Digital Libraries (cs.DL)

This paper opens by introducing Internet Plus Government (IPG), a new government initiative that has emerged in the last decade. To understand the benefits and challenges associated with this initiative worldwide, we analyzed research articles published in the e-governance area between 2008 and 2017. Content analysis and citation analysis were performed on 2105 articles to address three questions: (1) What types of new ICT have been adopted in the IPG initiative in the past decade? (2) How did scholars investigate interactions between the new ICTs and the governance core to IPG? (3) How did the new ICTs shape, and how were they shaped by, the evolution of governance in the past decade? Our analysis suggests that the IPG initiative has enriched the government information infrastructure. It has presented opportunities to accumulate and use huge volumes of data for better decision making and proactive government-citizen interaction. At the same time, the advance of open data, the widespread use of social media and the potential of data analytics have also generated great pressure to address challenging questions and issues in the domain of e-democracy.

[31]
Title: A Large-Scale Empirical Comparison of Static and Dynamic Test Case Prioritization Techniques
Comments: 12 pages, Accepted in the proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE'16)
Subjects: Software Engineering (cs.SE)

The large body of existing research in Test Case Prioritization (TCP) techniques can be broadly classified into two categories: dynamic techniques (that rely on run-time execution information) and static techniques (that operate directly on source and test code). Absent from this body of work is a comprehensive study aimed at understanding and evaluating the static approaches and comparing them to dynamic approaches on a large set of projects.
In this work, we perform the first extensive study aimed at empirically evaluating four static TCP techniques, comparing them with state-of-research dynamic TCP techniques at different test-case granularities (e.g., method and class level) in terms of effectiveness, efficiency and similarity of faults detected. This study was performed on 30 real-world Java programs encompassing 431 KLoC. In terms of effectiveness, we find that the static call-graph-based technique outperforms the other static techniques at the test-class level, but the topic-model-based technique performs better at the test-method level. In terms of efficiency, the static call-graph-based technique is also the most efficient among the static techniques. When examining the similarity of faults detected by the four static techniques compared to the four dynamic ones, we find that, on average, the faults uncovered by these two groups of techniques are quite dissimilar, with the top 10% of test cases agreeing on only 25% - 30% of detected faults. This prompts further research into the severity/importance of faults uncovered by these techniques, and into the potential for combining static and dynamic information for more effective approaches.

[32]
Title: Extend the shallow part of Single Shot MultiBox Detector via Convolutional Neural Network
Comments: 7 pages, 3 figures, 3 tables
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Single Shot MultiBox Detector (SSD) is one of the fastest algorithms in the current object detection field; it uses a fully convolutional neural network to detect objects of all scales in an image. Deconvolutional Single Shot Detector (DSSD) introduces additional context information by adding a deconvolution module to SSD, improving the mean Average Precision (mAP) on PASCAL VOC2007 from SSD's 77.5% to 78.6%. However, although DSSD obtains a 1.1% higher mAP than SSD, its speed decreases from 46 to 11.8 frames per second (FPS). In this paper, we propose a single-stage end-to-end image detection model called ESSD to overcome this dilemma. Our solution is to extend better context information to the shallow layers of the best single-stage detectors (e.g. SSD). Experimental results show that our model reaches 79.4% mAP, higher than DSSD and SSD by 0.8 and 1.9 points respectively. Meanwhile, our testing speed is 25 FPS on a Titan X GPU, more than double that of the original DSSD.

[33]
Title: Rate-Optimal Streaming Codes for Channels with Burst and Isolated Erasures
Comments: shorter version submitted to ISIT 2018
Subjects: Information Theory (cs.IT)

Recovery of data packets from packet erasures in a timely manner is critical for many streaming applications. An early paper by Martinian and Sundberg introduced a framework for streaming codes and designed rate-optimal codes that permit delay-constrained recovery from an erasure burst of length up to $B$. A recent work by Badr et al. extended this result and introduced a sliding-window channel model $\mathcal{C}(N,B,W)$. Under this model, in a sliding window of width $W$, one of the following erasure patterns is possible: (i) a burst of length at most $B$, or (ii) at most $N$ (possibly non-contiguous) arbitrary erasures. Badr et al. obtained a rate upper bound for streaming codes that can recover, with a time delay $T$, from any erasure pattern permissible under the $\mathcal{C}(N,B,W)$ model. However, constructions matching the bound were absent, except for a few parameter sets. In this paper, we present an explicit family of codes that achieves the rate upper bound for all feasible parameters $N$, $B$, $W$ and $T$.
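
The admissible erasure patterns of the sliding-window model $\mathcal{C}(N,B,W)$ can be sketched as a membership test. This is our reading of the model (treating a "burst" as erasures confined to a span of at most $B$ consecutive positions), not code from the paper:

```python
def admissible(erasures, N, B, W):
    """Check whether a 0/1 erasure pattern is permitted by the
    sliding-window channel C(N, B, W): every window of width W
    must contain either at most N (possibly scattered) erasures,
    or erasures confined to a span of length at most B."""
    n = len(erasures)
    for s in range(n - W + 1):
        window = erasures[s:s + W]
        idx = [i for i, e in enumerate(window) if e]
        if len(idx) <= N:
            continue                          # few isolated erasures
        if idx and idx[-1] - idx[0] + 1 <= B:
            continue                          # one burst of length <= B
        return False
    return True
```

A streaming code for this channel must recover every pattern this test accepts within delay $T$, which is why the two pattern types jointly constrain the achievable rate.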

[34]
Title: On-Device Bug Reporting for Android Applications
Comments: 2 pages, Accepted in the Proceedings of 4th IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft'17)
Subjects: Software Engineering (cs.SE)

Bugs that surface in mobile applications can be difficult to reproduce and fix due to several confounding factors including the highly GUI-driven nature of mobile apps, varying contextual states, differing platform versions and device fragmentation. It is clear that developers need support in the form of automated tools that allow for more precise reporting of application defects in order to facilitate more efficient and effective bug fixes. In this paper, we present a tool aimed at supporting application testers and developers in the process of On-Device Bug Reporting. Our tool, called ODBR, leverages the uiautomator framework and low-level event stream capture to offer support for recording and replaying a series of input gesture and sensor events that describe a bug in an Android application.

[35]
Title: The Utility Cost of Robust Privacy Guarantees
Subjects: Information Theory (cs.IT); Cryptography and Security (cs.CR)

Consider a data publishing setting for a data set with public and private features. The objective of the publisher is to maximize the amount of information about the public features in a revealed data set, while keeping the information leaked about the private features bounded. The goal of this paper is to analyze the performance of privacy mechanisms that are constructed to match the distribution learned from the data set. Two distinct scenarios are considered: (i) mechanisms are designed to provide a privacy guarantee for the learned distribution; and (ii) mechanisms are designed to provide a privacy guarantee for every distribution in a given neighborhood of the learned distribution. For the first scenario, given any privacy mechanism, upper bounds on the difference between the privacy-utility guarantees for the learned and true distributions are presented. In the second scenario, upper bounds on the reduction in utility incurred by providing a uniform privacy guarantee are developed.

[36]
Title: An Overview of Machine Teaching
Comments: A tutorial document grown out of NIPS 2017 Workshop on Teaching Machines, Robots, and Humans
Subjects: Learning (cs.LG)

In this paper we try to organize machine teaching as a coherent set of ideas. Each idea is presented as varying along a dimension. The collection of dimensions then forms the problem space of machine teaching, such that existing teaching problems can be characterized in this space. We hope this organization allows us to gain a deeper understanding of individual teaching problems, discover connections among them, and identify gaps in the field.

[37]
Title: Faster Algorithms for Large-scale Machine Learning using Simple Sampling Techniques
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)

Nowadays, the major challenge in machine learning is the "Big Data" challenge. In big data problems, due to a large number of data points, a large number of features per data point, or both, the training of models has become very slow. The training time has two major components: the time to access the data and the time to process it. In this paper, we propose one possible solution to handle big data problems in machine learning. The focus is on reducing the training time by reducing the data access time, using systematic sampling and cyclic/sequential sampling to select mini-batches from the dataset. To demonstrate the effectiveness of the proposed sampling techniques, we use Empirical Risk Minimization, a commonly used machine learning problem, in the strongly convex and smooth case. The problem is solved using SAG, SAGA, SVRG, SAAG-II and MBSGD (mini-batched SGD), each with two step-size determination techniques, namely a constant step size and the backtracking line search method. Theoretical results prove the same convergence, in expectation, for systematic sampling, cyclic sampling and the widely used random sampling technique. Experimental results on benchmark datasets demonstrate the efficacy of the proposed sampling techniques.
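
The two proposed mini-batch selection schemes can be sketched as follows; the function names and interface are ours, and the stride `k = n // batch_size` is an assumption made for illustration:

```python
import random

def systematic_batch(n, batch_size, seed=0):
    """Systematic sampling: draw one random starting offset, then
    take every k-th example. Only one random number is needed per
    batch, which reduces data access/selection overhead compared
    with drawing batch_size independent random indices."""
    k = n // batch_size
    rng = random.Random(seed)
    start = rng.randrange(k)
    return [(start + i * k) % n for i in range(batch_size)]

def cyclic_batches(n, batch_size):
    """Cyclic/sequential sampling: walk through the data in order,
    yielding consecutive index blocks; no randomness at all, and
    accesses are contiguous in memory/on disk."""
    for s in range(0, n, batch_size):
        yield list(range(s, min(s + batch_size, n)))
```

Both schemes touch the data in regular, predictable patterns, which is the source of the reduced access time the abstract describes.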

[38]
Title: Enhancing Bug Reports for Mobile Apps
Authors: Kevin Moran
Comments: 77 pages, MS Thesis presented to the faculty @ the College of William & Mary
Subjects: Software Engineering (cs.SE)

The modern software development landscape has seen a shift in focus toward mobile applications as "smart" devices near ubiquitous adoption. Due to this trend, the complexity of mobile applications has been increasing, making development and maintenance particularly challenging. However, it is clear that current bug tracking systems are not able to effectively support the construction of reports with actionable information that directly leads to a bug's resolution. To address the need for an improved reporting system, we introduce a novel solution, called FUSION, that helps users auto-complete reproduction steps in bug reports for mobile apps. FUSION links the information that users provide to program artifacts extracted through static and dynamic analysis performed before testing or release. The approach that FUSION employs is generalizable to other current mobile software platforms, and constitutes a new method by which off-device bug reporting can be conducted for mobile software projects. We evaluate FUSION by conducting a study that quantitatively and qualitatively measures the user experience of the system for both reporting and reproducing bugs, as well as the quality of the bug reports it produces. In a study involving 28 participants we apply FUSION to support the maintenance tasks of reporting and reproducing defects on 15 real-world bugs found in 14 open-source Android apps. Our results demonstrate that FUSION enables more reliable reproduction of bugs from reports by aiding users in reporting more detailed application-specific information than traditional bug tracking systems.

[39]
Title: Fixing Bug Reporting for Mobile and GUI-Based Applications
Authors: Kevin Moran
Comments: 4 pages, accepted to the Doctoral Symposium at the 38th ACM/IEEE International Conference on Software Engineering (ICSE'16)
Subjects: Software Engineering (cs.SE)

Smartphones and tablets have established themselves as mainstays in the modern computing landscape. It is conceivable that in the near future such devices may supplant laptops and desktops, becoming many users' primary means of carrying out typical computer-assisted tasks. In turn, this means that mobile applications will continue on a trajectory toward becoming more complex, and the primary focus of millions of developers worldwide. In order to properly create and maintain these apps, developers will need support, especially with regard to the prompt confirmation and resolution of bug reports. Unfortunately, current issue tracking systems typically only collect coarse-grained natural language descriptions, and lack features that help reporters include important information in their reports. This illustrates the lexical information gap that exists in current bug reporting systems for mobile and GUI-based apps. This paper outlines promising preliminary work toward addressing this problem and proposes a comprehensive research program which aims to implement new bug reporting mechanisms and examine the impact that they might have on related software maintenance tasks.

[40]
Title: WiLAD: Wireless Localisation through Anomaly Detection
Journal-ref: IEEE GLOBECOM 2017
Subjects: Networking and Internet Architecture (cs.NI)

We propose a new approach towards RSS (Received Signal Strength) based wireless localisation for scenarios where, instead of absolute positioning of an object, only the information whether an object is inside or outside of a specific area is required. This is motivated through a number of applications including, but not limited to, a) security: detecting whether an object is removed from a secure location, b) wireless sensor networks: detecting sensor movements outside of a network area, and c) computational behaviour analytics: detecting customers leaving a retail store. The result of such detection systems can naturally be utilised in building a higher level contextual understanding of a system or user behaviours. We use a supervised learning method to overcome issues related to RSS based localisation systems including multipath fading, shadowing, and incorrect model parameters (as in unsupervised methods). Moreover, to reduce the cost of collecting training data, we employ a detection method called One-Class SVM (OC-SVM) which requires only one class of data (positive data, or target class data) for training. We derive a mathematical approximation of accuracy which utilises the characteristics of wireless signals as well as OC-SVM. Based on this we then propose a novel mathematical formula to find optimal placement of devices. This enables us to optimize the placement without performing any costly experiments or simulations. We validate our proposed mathematical framework based on simulated and real experiments.
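
A minimal sketch of the one-class idea, training only on positive (inside-the-area) RSS vectors. To keep the example dependency-free, a centroid-plus-radius rule stands in for the actual OC-SVM (e.g. scikit-learn's `sklearn.svm.OneClassSVM`), and all RSS numbers are invented:

```python
def euclidean(a, b):
    """Plain Euclidean distance between two RSS vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_one_class(rss_samples):
    """Fit a detector from positive-class data only, as in
    one-class learning: remember the centroid of the training
    RSS vectors and the largest training distance to it."""
    dims = len(rss_samples[0])
    centroid = [sum(s[d] for s in rss_samples) / len(rss_samples)
                for d in range(dims)]
    radius = max(euclidean(s, centroid) for s in rss_samples)
    return centroid, radius

def is_inside(sample, model):
    """Declare 'inside the area' when the new RSS vector falls
    within the learned radius; otherwise flag an anomaly."""
    centroid, radius = model
    return euclidean(sample, centroid) <= radius

# Toy training data: RSS (dBm) from two access points, all
# measured while the object was inside the target area.
model = train_one_class([[-40, -50], [-42, -48], [-41, -49]])
```

The appeal, as in the paper, is that no negative (outside-the-area) measurements are needed during training.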

[41]
Title: FUSION: A Tool for Facilitating and Augmenting Android Bug Reporting
Comments: 4 pages, Accepted in the Proceedings of 38thACM/IEEE International Conference on Software Engineering (ICSE'16)
Subjects: Software Engineering (cs.SE)

As the popularity of mobile smart devices continues to climb, the complexity of "apps" continues to increase, making the development and maintenance process challenging. Current bug tracking systems lack key features to effectively support the construction of reports with actionable information that directly leads to a bug's resolution. In this demo we present the implementation of a novel bug reporting system, called Fusion, that helps users include reproduction steps in bug reports for mobile apps. Fusion links user-provided information to program artifacts extracted through static and dynamic analysis performed before testing or release. Results of preliminary studies demonstrate that Fusion both effectively facilitates reporting and allows for more reliable reproduction of bugs from reports compared to traditional issue tracking systems, by presenting more detailed contextual app information. Tool website: www.fusion-android.com Video url: https://youtu.be/AND9h0ElxRg

[42]
Title: PTB-TIR: A Thermal Infrared Pedestrian Tracking Benchmark
Authors: Qiao Liu, Zhenyu He
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Thermal infrared (TIR) pedestrian tracking is one of the most important components in numerous computer vision applications, and it has a major advantage: it can track pedestrians in total darkness. Evaluating TIR pedestrian trackers fairly on a benchmark dataset is significant for the development of this field; however, no such benchmark dataset exists. In this paper, we develop a TIR pedestrian tracking dataset for the evaluation of TIR pedestrian trackers. The dataset includes 60 thermal sequences with manual annotations, and each sequence has nine attribute labels for attribute-based evaluation. In addition to the dataset, we carry out large-scale evaluation experiments on our benchmark using nine publicly available trackers. The experimental results help us understand the strengths and weaknesses of these trackers. Moreover, to gain deeper insight into TIR pedestrian trackers, we divide a tracker into three components: feature extractor, motion model, and observation model. We then conduct three comparison experiments on our benchmark dataset to validate how each component affects the tracker's performance. The findings of these experiments provide some guidelines for future research.

[43]
Title: On Efficiently Detecting Overlapping Communities over Distributed Dynamic Graphs
Comments: A short version of this paper will be published as ICDE'2018 poster
Subjects: Social and Information Networks (cs.SI); Physics and Society (physics.soc-ph)

Modern networks are of huge size as well as high dynamics, which challenges the efficiency of community detection algorithms. In this paper, we study the problem of overlapping community detection on distributed and dynamic graphs. Given a distributed, undirected and unweighted graph, the goal is to detect overlapping communities incrementally as the graph is dynamically changing. We propose an efficient algorithm, called \textit{randomized Speaker-Listener Label Propagation Algorithm} (rSLPA), based on the \textit{Speaker-Listener Label Propagation Algorithm} (SLPA), by relaxing the probability distribution of label propagation. Besides detecting high-quality communities, rSLPA can incrementally update the detected communities after a batch of edge insertion and deletion operations. To the best of our knowledge, rSLPA is the first algorithm that can incrementally capture the same communities as those obtained by applying the detection algorithm from scratch on the updated graph. Extensive experiments are conducted on both synthetic and real-world datasets, and the results show that our algorithm can achieve high accuracy and efficiency at the same time.
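
A toy version of the underlying speaker-listener label propagation (SLPA), on which rSLPA builds by relaxing the label-sampling distribution. This sketch is ours, omits the incremental-update machinery, and uses hypothetical parameter defaults:

```python
import random
from collections import Counter

def slpa(adj, iters=20, threshold=0.1, seed=1):
    """Toy SLPA: every node keeps a memory of heard labels,
    initialized with its own id. Each round, each listener asks
    its neighbours (speakers) for one label sampled from their
    memories and records the most popular answer. Thresholding
    the memories yields possibly overlapping communities."""
    rng = random.Random(seed)
    memory = {v: Counter({v: 1}) for v in adj}
    for _ in range(iters):
        for listener in rng.sample(list(adj), len(adj)):
            heard = []
            for speaker in adj[listener]:
                labels, counts = zip(*memory[speaker].items())
                heard.append(rng.choices(labels, counts)[0])
            if heard:
                memory[listener][Counter(heard).most_common(1)[0][0]] += 1
    communities = {}
    for v, mem in memory.items():
        total = sum(mem.values())
        for label, count in mem.items():
            if count / total >= threshold:
                communities.setdefault(label, set()).add(v)
    return communities

# Two triangles joined by a bridge edge (2 -- 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
communities = slpa(adj)
```

Because a node's memory can hold several frequent labels, a bridge node like 2 or 3 may end up in more than one community, which is what "overlapping" means here.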

[44]
Title: Uplink Coverage Performance of an Underlay Drone Cell for Temporary Events
Comments: This work has been submitted to 2018 IEEE International Conference on Communications Workshops (ICC Workshops): Integrating UAVs into 5G
Subjects: Information Theory (cs.IT)

Using a drone as an aerial base station (ABS) to provide coverage to users on the ground is envisaged as a promising solution for beyond fifth generation (beyond-5G) wireless networks. While the literature to date has examined downlink cellular networks with ABSs, we consider an uplink cellular network with an ABS. Specifically, we analyze the use of an underlay ABS to provide coverage for a temporary event, such as a sporting event or a concert in a stadium. Using stochastic geometry, we derive the analytical expressions for the uplink coverage probability of the terrestrial base station (TBS) and the ABS. The results are expressed in terms of (i) the Laplace transforms of the interference power distribution at the TBS and the ABS and (ii) the distance distribution between the ABS and an independently and uniformly distributed (i.u.d.) ABS-supported user equipment and between the ABS and an i.u.d. TBS-supported user equipment. The accuracy of the analytical results is verified by Monte Carlo simulations. Our results show that varying the ABS height leads to a trade-off between the uplink coverage probability of the TBS and the ABS. In addition, assuming a quality of service of 90% at the TBS, an uplink coverage probability of the ABS of over 85% can be achieved, with the ABS deployed at or below its optimal height of typically between 250-500 m for the considered setup.

[45]
Title: Toward Scalable Verification for Safety-Critical Deep Networks
Comments: Accepted for presentation at SysML 2018
Subjects: Artificial Intelligence (cs.AI); Logic in Computer Science (cs.LO)

The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability. Formal verification can address these concerns by guaranteeing that a deep learning system operates as intended, but the state-of-the-art is limited to small systems. In this work-in-progress report we give an overview of our work on mitigating this difficulty, by pursuing two complementary directions: devising scalable verification techniques, and identifying design choices that result in deep learning systems that are more amenable to verification.

[46]
Subjects: Information Theory (cs.IT)

We study communication in the presence of a jamming adversary where quadratic power constraints are imposed on the transmitter and the jammer. The jamming signal is assumed to be a function of the codebook, and a noncausal but noisy observation of the transmitted codeword. For a certain range of the noise-to-signal ratios (NSRs) of the transmitter and the jammer, we are able to characterize the capacity of this channel under deterministic encoding. For the remaining NSR regimes, we determine the capacity under the assumption of a small amount of common randomness (at most $\mathcal O(\log(n))$ bits in one sub-regime, and at most $\mathcal O(n^2)$ bits in the other sub-regime) available to the encoder-decoder pair. Our proof techniques involve a novel myopic list-decoding result for achievability and a Plotkin-type push attack for the converse in a subregion of the NSRs, which may be of independent interest.

[47]
Title: On a Generic Security Game Model
Journal-ref: Shandilya, V. and Shiva, S. (2017) On a Generic Security Game Model. Int. J. Communications , Network and System Sciences , 10, 142-172
Subjects: Cryptography and Security (cs.CR); Computer Science and Game Theory (cs.GT)

To protect systems exposed to the Internet against attacks, a security system with the capability to engage with the attacker is needed. There have been attempts to model the engagement/interactions between users, both benign and malicious, and network administrators as games. Building on such works, we present a game model which is generic enough to capture various modes of such interactions. The model facilitates stochastic games with imperfect information, where the information is imperfect due to erroneous sensors leading to incorrect perception of the current state by the players. To model this error in perception, distributed over multiple states, we use Euclidean distances between the outputs of the sensors. We build a 5-state game to represent the interaction of the administrator with the user. The states correspond to (1) the user being outside the system, on the Internet, and then, after logging in to the system, (2) having low privileges, (3) having high privileges, (4) successfully attacking, and (5) getting trapped in a honeypot by the administrator. Each state has its own action set. We present the game with a distinct perceived action set corresponding to each distinct information set of these states. A numerical simulation of an example game is presented to show the evaluation of rewards to the players and the preferred strategies. We also present the conditions for formulating strategies when dealing with more than one attacker and making collaborations.

[48]
Title: 3D CNN-based classification using sMRI and MD-DTI images for Alzheimer disease studies
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Computer-aided early diagnosis of Alzheimer's Disease (AD) and its prodromal form, Mild Cognitive Impairment (MCI), has been the subject of extensive research in recent years. Some recent studies have shown promising results in AD and MCI determination using structural and functional Magnetic Resonance Imaging (sMRI, fMRI), Positron Emission Tomography (PET) and Diffusion Tensor Imaging (DTI) modalities. Furthermore, fusion of imaging modalities in a supervised machine learning framework has shown a promising direction of research.
In this paper we first review major trends in automatic classification methods, such as feature-extraction-based methods as well as deep learning approaches, in medical image analysis applied to the field of Alzheimer's Disease diagnostics. Then we propose our own algorithm for Alzheimer's Disease diagnostics, based on a convolutional neural network and the fusion of the sMRI and DTI modalities on a hippocampal ROI, using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (this http URL). Comparison with a single-modality approach shows promising results. We also propose our own method of data augmentation for balancing classes of different sizes, and we analyze the impact of the ROI size on the classification results.

[49]
Title: WYFIWYG: Investigating Effective User Support in Aerial Videography
Subjects: Human-Computer Interaction (cs.HC)

Tools for quadrotor trajectory design have enabled single videographers to create complex aerial video shots that previously required dedicated hardware and several operators. We build on this prior work by studying film-makers' working practices, which informed a system design that brings expert workflows closer to end-users. For this purpose, we propose WYFIWYG, a new quadrotor camera tool which (i) allows users to design a video solely by specifying its frames, (ii) encourages exploration of the scene prior to filming and (iii) allows a camera target to be framed continuously according to compositional intentions. Furthermore, we propose extensions to an existing algorithm, generating more intuitive angular camera motions and producing spatially and temporally smooth trajectories. Finally, we conduct a user study in which we evaluate how end-users work with current videography tools. We conclude by summarizing the findings of this work as implications for the design of UIs and algorithms of quadrotor camera tools.

[50]
Title: Privacy-preserving Data Splitting: A Combinatorial Approach
Comments: 22 pages, 5 tables, 1 figure
Subjects: Cryptography and Security (cs.CR)

Privacy-preserving data splitting is a technique that aims to protect data privacy by storing different fragments of data in different locations. In this work we give a new combinatorial formulation of the data splitting problem. We see it as a purely combinatorial problem, in which we have to split data attributes into different fragments in a way that satisfies certain combinatorial properties derived from processing and privacy constraints. Using this formulation, we develop new combinatorial and algebraic techniques to obtain solutions to the data splitting problem. We present an algebraic method which builds an optimal data splitting solution using Gr\"{o}bner bases. Since this method is not efficient in general, we also develop a greedy algorithm for finding solutions that are not necessarily of minimal size.
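To make the combinatorial setting concrete, here is a hypothetical sketch of a greedy splitting heuristic (not the paper's algorithm): privacy constraints are modeled as sets of attributes that must not be co-located, and each attribute is placed in the first fragment that would not complete any such set. All names and the constraint encoding are assumptions for illustration.

```python
# Hypothetical greedy data-splitting sketch: a constraint is a set of
# attributes that must never end up together in one fragment.
def greedy_split(attributes, constraints):
    fragments = []
    for attr in attributes:
        placed = False
        for frag in fragments:
            # placing attr here is unsafe if it would complete a constraint
            if not any(c <= frag | {attr} for c in constraints if attr in c):
                frag.add(attr)
                placed = True
                break
        if not placed:
            fragments.append({attr})  # open a new fragment (new location)
    return fragments

# Example: 'name' and 'disease' must not be stored together.
frags = greedy_split(["name", "disease", "zip"], [{"name", "disease"}])
```

A greedy pass like this yields a valid split quickly, but, as the abstract notes for the authors' greedy algorithm, the number of fragments is not necessarily minimal.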

[51]
Title: Code Constructions for Distributed Storage With Low Repair Bandwidth and Low Repair Complexity
Subjects: Information Theory (cs.IT)

We present the construction of a family of erasure correcting codes for distributed storage systems that achieve low repair bandwidth and low repair complexity. The construction is based on two classes of codes, where the primary goal of the first class is to achieve good fault tolerance, while the second class aims at reducing the repair bandwidth and repair complexity. The repair procedure is a two-step procedure in which parts of the failed node are repaired in the first step using the first code. The symbols downloaded during the first step are cached in memory and used to repair the remaining erased data symbols at no additional read cost during the second step of the repair process. The first class of codes is based on maximum distance separable (MDS) codes modified using piggybacks, while the second class is designed to reduce the number of additional symbols that need to be downloaded to repair the remaining erased symbols. We show that the proposed codes achieve better repair bandwidth than MDS, Piggyback, and local reconstruction codes, and better repair complexity than MDS, Zigzag, and Piggyback codes.

[52]
Title: A universality theorem for allowable sequences with applications
Subjects: Computational Geometry (cs.CG)

Order types are a well known abstraction of combinatorial properties of a point set. By Mn\"ev's universality theorem, for each semi-algebraic set $V$ there is an order type with a realization space that is \emph{stably equivalent} to $V$. We consider realization spaces of \emph{allowable sequences}, a refinement of order types. We show that the realization spaces of allowable sequences are \emph{universal} and that, consequently, deciding realizability is complete in the \emph{existential theory of the reals} (\ER). This result holds even if the realization space of the order type induced by the allowable sequence is non-empty. Furthermore, we argue that our result is a useful tool for further geometric reductions. We support this by giving \ER-hardness proofs for the realizability of abstract convex geometries and for the recognition problem of visibility graphs of polygons with holes, using the hardness result for allowable sequences. This solves two longstanding open problems.

[53]
Title: Some model theory for the modal $μ$-calculus: syntactic characterisations of semantic properties
Subjects: Logic in Computer Science (cs.LO)

This paper contributes to the theory of the modal $\mu$-calculus by proving some model-theoretic results. More particularly, we discuss a number of semantic properties pertaining to formulas of the modal $\mu$-calculus. For each of these properties we provide a corresponding syntactic fragment, in the sense that a $\mu$-formula $\xi$ has the given property iff it is equivalent to a formula $\xi'$ in the corresponding fragment. Since this formula $\xi'$ will always be effectively obtainable from $\xi$, as a corollary, for each of the properties under discussion, we prove that it is decidable in elementary time whether a given $\mu$-calculus formula has the property or not.
The properties that we study all concern the way in which the meaning of a formula $\xi$ in a model depends on the meaning of a single, fixed proposition letter $p$. For example, consider a formula $\xi$ which is monotone in $p$; such a formula $\xi$ is called continuous (respectively, fully additive) if, in addition, it satisfies the property that, if $\xi$ is true at a state $s$, then there is a finite set (respectively, a singleton set) $U$ such that $\xi$ remains true at $s$ if we restrict the interpretation of $p$ to the set $U$. Each of the properties that we consider is, in a similar way, associated with one of the following special kinds of subset of a tree model: singletons, finite sets, finitely branching subtrees, noetherian subtrees (i.e., without infinite paths), and branches.
Our proofs for these characterization results will be automata-theoretic in nature; we will see that the effectively defined maps on formulas are in fact induced by rather simple transformations on modal automata. Thus our results can also be seen as a contribution to the model theory of modal automata.

[54]
Title: On-Chip CNN Accelerator for Image Super-Resolution
Comments: 7 pages, submitted to a conference for review
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Hardware Architecture (cs.AR)

To implement convolutional neural networks (CNNs) in hardware, state-of-the-art CNN accelerators pipeline the computation and data-transfer stages using an off-chip memory and execute them simultaneously on the same timeline. However, since the large volume of feature maps generated during operation must be transmitted to the off-chip memory, the pipeline stage length is determined by the off-chip data-transfer stage. Fusion architectures that can fuse multiple layers have been proposed to solve this problem, but applications such as super-resolution (SR) require a large on-chip memory because of the high resolution of the feature maps. In this paper, we propose a novel on-chip CNN accelerator for SR that optimizes the CNN dataflow in the on-chip memory. First, a convolution loop optimization technique is proposed to avoid the use of a frame buffer. Second, we develop a combined convolutional layer processor to reduce the buffer size used to store the feature maps. Third, we explore how to perform low-cost multiply-and-accumulate operations in the deconvolutional layer used in SR. Finally, we propose a two-stage quantization algorithm to select the optimized hardware size for the limited number of DSPs available to implement the on-chip CNN accelerator. We evaluate the proposed accelerator with FSRCNN, one of the most popular CNN-based SR algorithms. Experimental results show that the proposed accelerator requires 9.21 ms to produce an output image at 2560x1440 pixel resolution, which is 36 times faster than the conventional method. In addition, we reduce on-chip memory usage and DSP usage by 4 times and 1.44 times, respectively, compared to the conventional methods.

[55]
Title: Layered TPOT: Speeding up Tree-based Pipeline Optimization
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI)

With the demand for machine learning increasing, so does the demand for tools that make it easier to use. Automated machine learning (AutoML) tools have been developed to address this need, such as the Tree-Based Pipeline Optimization Tool (TPOT), which uses genetic programming to build optimal pipelines. We introduce Layered TPOT, a modification of TPOT which aims to create pipelines as good as the original's, but in significantly less time. This approach evaluates candidate pipelines on increasingly large subsets of the data according to their fitness, using a modified evolutionary algorithm to allow separate competition between pipelines trained on different sample sizes. Empirical evaluation shows that, on sufficiently large datasets, Layered TPOT indeed finds better models faster.
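The layered idea of promoting only the fittest candidates to ever-larger data subsets can be sketched as follows. This is an illustrative successive-filtering loop under our own assumptions, not the authors' code; the function names, the halving schedule, and the fitness interface are all hypothetical.

```python
# Illustrative sketch: evaluate candidate pipelines on growing subset sizes,
# keeping only the top fraction at each layer, so expensive evaluations on
# large data are spent only on promising candidates.
def layered_search(candidates, fitness, subset_sizes, keep_fraction=0.5):
    survivors = list(candidates)
    for n in subset_sizes:                       # e.g. [100, 1000, 10000]
        scored = sorted(survivors, key=lambda c: fitness(c, n), reverse=True)
        keep = max(1, int(len(scored) * keep_fraction))
        survivors = scored[:keep]                # cheap filtering on small n
    return survivors[0]

# Toy fitness: each "pipeline" is just a number equal to its quality,
# independent of subset size n.
best = layered_search([0.2, 0.9, 0.5], lambda c, n: c, [10, 100])
```

In Layered TPOT the filtering is embedded in a modified evolutionary algorithm rather than a plain sort, but the cost-saving principle is the same.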

[56]
Title: Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors
Subjects: Human-Computer Interaction (cs.HC)

Users' visual attention is highly fragmented during mobile interactions, but the erratic nature of these attention shifts currently limits attentive user interfaces to adapting after the fact, i.e. after shifts have already happened, thereby severely limiting the adaptation capabilities and user experience. To address these limitations, we study attention forecasting -- the challenging task of predicting whether users' overt visual attention (gaze) will shift between a mobile device and environment in the near future or how long users' attention will stay in a given location. To facilitate the development and evaluation of methods for attention forecasting, we present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). As a first step towards a fully-fledged attention forecasting interface, we further propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users' visual scene. We demonstrate the feasibility of forecasting bidirectional attention shifts between the device and the environment as well as of predicting the first and total attention span on the device and environment using our method. We further study the impact of different sensors and feature sets on performance and discuss the significant potential but also remaining challenges of forecasting user attention during mobile interactions.

[57]
Title: Time-Slotted Scheduling Schemes for Multi-hop Concurrent Transmission in WPANs with Directional Antenna
Comments: 11 pages, 10 figures, published in ETRI Journal, Volume 36, Number 3, pp. 374-384, June 2014
Journal-ref: ETRI Journal, vol. 36, no. 3, pp. 374-384, 2014
Subjects: Networking and Internet Architecture (cs.NI)

To achieve high-speed (gigabit) connectivity for short-range wireless multimedia applications, millimeter-wave (mmWave) wireless personal area networks with directional antennas are gaining increased interest. Due to the use of directional antennas and mmWave communications, the probability of non-interfering transmissions increases in a localized region. Network throughput can be increased immensely by the concurrent time allocation of non-interfering transmissions. The problem of finding the optimum time allocation for concurrent transmissions is NP-hard. In this paper, we propose two enhanced versions of previously proposed multi-hop concurrent transmission (MHCT) schemes. To increase network capacity, the proposed schemes efficiently make use of the free holes in the time-allocation map of the MHCT scheme, thus making it more compact.

[58]
Title: Reconstruction Codes for DNA Sequences with Uniform Tandem-Duplication Errors
Subjects: Information Theory (cs.IT)

DNA as a data storage medium has several advantages, including far greater data density compared to electronic media. We propose that schemes for data storage in the DNA of living organisms may benefit from studying the reconstruction problem, which is applicable whenever multiple reads of noisy data are available. This strategy is uniquely suited to the medium, which inherently replicates stored data into multiple copies that are made distinct by mutations. We consider noise introduced solely by uniform tandem-duplication, and utilize the relation to constant-weight integer codes in the Manhattan metric. By bounding the intersection of the cross-polytope with hyperplanes, we prove the existence of reconstruction codes with greater capacity than known error-correcting codes, which we can determine analytically for any set of parameters.

[59]
Title: Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations
Comments: The 31st Annual Conference on Neural Information Processing (NIPS) - Workshop on Learning Disentangled Features: from Perception to Control, Long Beach, CA, December 2017
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Learning (cs.LG); Machine Learning (stat.ML)

We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations. The representations change significantly when translation and part-of-speech decoders are added. The more decoders a model employs, the better it clusters sentences according to their syntactic similarity, as the representation space becomes less entangled. We explore the structure of the representation space by interpolating between sentences, which yields interesting pseudo-English sentences, many of which have recognizable syntactic structure. Lastly, we point out an interesting property of our models: The difference-vector between two sentences can be added to change a third sentence with similar features in a meaningful way.
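The difference-vector property mentioned last can be illustrated with toy vectors. The 2-D "embeddings" below are made up for illustration only; the paper's learned sentence representations are high-dimensional and come from the trained autoencoders.

```python
import numpy as np

# Made-up 2-D "sentence embeddings" standing in for learned representations.
emb = {
    "she walks":  np.array([1.0, 0.0]),
    "she walked": np.array([1.0, 1.0]),
    "he runs":    np.array([2.0, 0.0]),
}

# Difference vector capturing a feature change (here, a tense shift) ...
delta = emb["she walked"] - emb["she walks"]

# ... added to a third sentence to change it in the same meaningful way.
target = emb["he runs"] + delta
```

If the representation space is well disentangled, `target` would lie near the embedding of the analogously changed sentence ("he ran" in this toy setup).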

[60]
Title: In-RDBMS Hardware Acceleration of Advanced Analytics
Subjects: Databases (cs.DB); Hardware Architecture (cs.AR); Learning (cs.LG)

The data revolution is fueled by advances in several areas, including databases, high-performance computer architecture, and machine learning. Although timely, there is a void of solutions that bring these disjoint directions together. This paper sets out to be the initial step towards such a union. The aim is to devise a solution for the in-Database Acceleration of Advanced Analytics (DAnA). DAnA empowers database users to leap beyond traditional data summarization techniques and seamlessly utilize hardware-accelerated machine learning. Deploying specialized hardware, such as FPGAs, for in-database analytics currently requires hand-designing the hardware and manually routing the data. Instead, DAnA automatically maps a high-level specification of in-database analytics queries to the FPGA accelerator. The accelerator implementation is generated from a User Defined Function (UDF), expressed as part of a SQL query in a Python-embedded Domain Specific Language (DSL). To realize efficient in-database integration, DAnA accelerators contain a novel hardware structure, Striders, that directly interfaces with the buffer pool of the database. DAnA obtains the schema and page layout information from the database catalog to configure the Striders. In turn, Striders extract, cleanse, and process the training data tuples, which are consumed by a multi-threaded FPGA engine that executes the analytics algorithm. We integrated DAnA with PostgreSQL to generate hardware accelerators for a range of real-world and synthetic datasets running diverse ML algorithms. Results show that DAnA-enhanced PostgreSQL provides, on average, an 11.3x end-to-end speedup over MADLib and is 5.4x faster than multi-threaded MADLib running on Greenplum. DAnA provides these benefits while hiding the complexity of hardware design from data scientists and allowing them to express the algorithm in 30-60 lines of Python.

[61]
Title: rcss: Subgradient and duality approach for dynamic programming
Authors: Juri Hinz, Jeremy Yee
Subjects: Mathematical Software (cs.MS)

This short paper gives an introduction to the \emph{rcss} package. The R package \emph{rcss} provides users with a tool to approximate the value functions in the Bellman recursion using convex piecewise linear functions formed using operations on tangents. A pathwise method is then used to gauge the quality of the numerical results.

[62]
Title: Multi-measures fusion based on multi-objective genetic programming for full-reference image quality assessment
Subjects: Multimedia (cs.MM)

In this paper, we exploit the flexibility of multi-objective fitness functions and combine the model-structure selection ability of standard genetic programming (GP) with the parameter-estimation power of classical regression, via multi-gene genetic programming (MGGP), to propose a new fusion technique for image quality assessment (IQA) called Multi-measures Fusion based on Multi-Objective Genetic Programming (MFMOGP). This technique automatically selects the most significant measures, from 16 full-reference IQA measures, for use in aggregation, and finds the weights of a weighted sum of their outputs while simultaneously optimizing for both accuracy and complexity. The resulting well-performing fusions of IQA measures are evaluated on the four largest publicly available image databases and compared against state-of-the-art full-reference IQA approaches. The comparison reveals that the proposed approach outperforms other recently developed state-of-the-art fusion approaches.

[63]
Title: Constrained locating arrays for combinatorial interaction testing
Subjects: Software Engineering (cs.SE); Discrete Mathematics (cs.DM)

This paper introduces the notion of Constrained Locating Arrays (CLAs), mathematical objects which can be used for fault localization in a testing process for information systems. CLAs extend ordinary locating arrays to make them applicable to testing of systems that have constraints on test parameters. Such constraints are common in real-world systems; thus CLAs enhance the applicability of locating arrays to practical testing problems.

[64]
Title: Optimal Weighting for Exam Composition
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI)

A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically these exams are prepared several days in advance, and generic question scores are used based on a rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were 30 multiple choice questions worth 3 points, 15 true/false with explanation questions worth 4 points, and 5 analytical exercises worth 10 points. We describe a novel framework in which algorithms from machine learning are used to modify the exam question weights in order to optimize the exam scores, using the overall class grade as a proxy for a student's true ability. We show that significant error reduction can be obtained by our approach over standard weighting schemes, and we make several new observations regarding the properties of the "good" and "bad" exam questions that can have an impact on the design of improved future evaluation methods.
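One simple instantiation of the weighting idea (a hedged sketch under our own assumptions, not necessarily the authors' learning algorithm) is to fit the question weights by least squares so that the weighted exam score best predicts each student's class-grade proxy:

```python
import numpy as np

# Rows: students; columns: per-question scores on the exam.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Overall class grades, used as a proxy for true ability.
grades = np.array([2.0, 1.0, 3.0])

# Choose weights w minimizing ||X @ w - grades||^2.
w, *_ = np.linalg.lstsq(X, grades, rcond=None)
```

A real deployment would likely add constraints (e.g. non-negative weights summing to the exam total), which plain least squares does not enforce.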

[65]
Title: Analysis of the Relation between Artificial Intelligence and the Internet from the Perspective of Brain Science
Subjects: Computers and Society (cs.CY)

Artificial intelligence (AI) techniques such as deep learning and cloud AI computation have been advancing at a rapid pace since 2014. There is no doubt that the prosperity of AI is inseparable from the development of the Internet. However, little attention has been paid to the link between AI and the Internet. This paper explores this link with insights from brain science, mainly from the following views: 1) the general relation between artificial intelligence and the Internet of Things, cloud computing, big data and the Industrial Internet from the perspective of brain science; 2) the construction of a new AI system model combining the Internet and brain science.

[66]
Title: Engagement in Foundational Computer Science Courses Through Supplementary Content for Algorithms
Subjects: Computers and Society (cs.CY)

Engaging students in foundational Computer Science concepts is vital for their continued success in more advanced topics in the field. A series of Jupyter notebooks was conceived as a way of using Bloom's Taxonomy to reinforce concepts taught in an introductory algorithms class. The idea of the notebooks is to keep students engaged in the lesson and in turn motivate them to persevere through the end of the course.

[67]
Title: Deep Learning for Fatigue Estimation on the Basis of Multimodal Human-Machine Interactions
Comments: 12 pages, 10 figures, 1 table; presented at XXIX IUPAP Conference in Computational Physics (CCP2017) July 9-13, 2017, Paris, University Pierre et Marie Curie - Sorbonne (this https URL)
Subjects: Computers and Society (cs.CY); Human-Computer Interaction (cs.HC); Learning (cs.LG)

A new method is proposed to monitor the level of current physical load and accumulated fatigue using several objective and subjective characteristics. It was applied to a dataset targeted at estimating physical load and fatigue. Data from peripheral sensors (accelerometer, GPS, gyroscope, magnetometer) and a brain-computer interface (electroencephalography) were collected, integrated, and analyzed by several statistical and machine learning methods (moment analysis, cluster analysis, principal component analysis, etc.). Hypothesis 1, that physical activity can be classified not only by objective parameters but also by subjective parameters, was presented and proved. Hypothesis 2, that experienced physical load and subsequent restoration, as a fatigue level, can be estimated quantitatively and that distinctive patterns can be recognized, was presented, and some ways to prove it were demonstrated. Several "physical load" and "fatigue" metrics were proposed. The results presented allow the application of machine learning methods to be extended to the characterization of complex human activity patterns (for example, to estimate actual physical load and fatigue, and to give cautions and advice).

[68]
Title: Effects of Home Resources and School Environment on Eighth-Grade Mathematics Achievement in Taiwan
Authors: Jiaqi Cai
Subjects: Computers and Society (cs.CY); Physics Education (physics.ed-ph)

Over the past decades, researchers have explored the relationship among home resources, school environment, and students' mathematics achievement in a large number of studies. Many of them suggested that rich home resources for learning were related to higher average academic achievement. Some also suggested that the home background was closely associated with the learning environment, and therefore influenced students' achievements. Thus, this study hypothesized that students who own more home resources would perform better than students who possess fewer resources, and that schools that have more socioeconomically advantaged students, are located in high-income neighborhoods, and possess more instructional resources would have better mathematics performance. The study focuses on eighth graders in Taiwan and explores the variance in mathematics achievement of students as a function of student- and school-level differences.

[69]
Title: A taxonomy of video lecture styles
Subjects: Computers and Society (cs.CY)

Many educational organizations are employing instructional video in their pedagogy, but there is limited understanding of the possible presentation styles. In practice, the presentation style of video lectures ranges from a direct recording of classroom teaching with a stationary camera and screencasts with voice-over, up to highly elaborate video post-production. Previous work evaluated the effectiveness of several presentation styles, but there has not been any consistent taxonomy, which would have made comparisons and meta-analyses possible. In this article, we surveyed the research literature and we examined contemporary video-based courses, which have been produced by diverse educational organizations and teachers across various academic disciplines. We organized video lectures in two dimensions according to the level of human presence and according to the type of instructional media. In addition to organizing existing video lectures in a comprehensive way, the proposed taxonomy offers a design space that facilitates the choice of a suitable presentation style, as well as the preparation of new ones.

[70]
Title: Big Data and Learning Analytics in Higher Education: Demystifying Variety, Acquisition, Storage, NLP and Analytics
Authors: Amal S. Alblawi
Comments: 6 pages , 2017 IEEE Conference on Big Data and Analytics (ICBDA)
Subjects: Computers and Society (cs.CY)

Different sectors have sought to take advantage of opportunities to invest in big data analytics and natural language processing (NLP) in order to improve their productivity and competitiveness. Current challenges facing the higher education sector include a rapidly changing and evolving environment, which necessitates the development of new ways of thinking. Interest has therefore increased in analytics as part of the solution to many issues in higher education, including the rate of student attrition and learner support. This study provides a comprehensive discussion of big data, learning analytics and the use of NLP in higher education. In addition, it introduces an integrated learning analytics solution leveraging a distributed technology system capable of supporting academic authorities and advisors at educational institutions in making decisions concerning individual students.

[71]
Title: Lab Based Curriculum for CIS and Related Technology
Subjects: Computers and Society (cs.CY)

The Computer Information System (CIS) is information and communication technology in support of business processes. In this paper, we present a typical undergraduate computer information system curriculum examining the degree of lab intensity and its effect on the course efficacy. A CIS program is usually part of the school of business as it is in support of business processes. We also explore the differences between a CIS curriculum and other computer related technology courses, such as Information Technology (IT), Computer Science (CS), and Software Engineering (SE). The curriculum is composed of several elements such as content and sequence of subjects, classrooms equipped with computer projection, internet, and local network access, and appropriate computing and software infrastructure. We will focus on the importance and adequacy of labs for the CIS curriculum. The proposed CIS curriculum works for a 4-year as well as a 3-year program. This paper provides a recommendation for local and Federal Accreditation agencies and curriculum committees.

[72]
Title: Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour
Subjects: Computers and Society (cs.CY)

Rapport, the close and harmonious relationship in which interaction partners are "in sync" with each other, was shown to result in smoother social interactions, improved collaboration, and improved interpersonal outcomes. In this work, we are the first to investigate automatic prediction of low rapport during natural interactions within small groups. This task is challenging given that rapport only manifests in subtle non-verbal signals that are, in addition, subject to influences of group dynamics as well as inter-personal idiosyncrasies. We record videos of unscripted discussions of three to four people using a multi-view camera system and microphones. We analyse a rich set of non-verbal signals for rapport detection, namely facial expressions, hand motion, gaze, speaker turns, and speech prosody. Using facial features, we can detect low rapport with an average precision of 0.7 (chance level at 0.25), while incorporating prior knowledge of participants' personalities can even achieve early prediction without a drop in performance. We further provide a detailed analysis of different feature sets and the amount of information contained in different temporal segments of the interactions.

[73]
Title: Tamil Open-Source Landscape - Opportunities and Challenges
Subjects: Computers and Society (cs.CY)

We report in this paper that the Tamil open-source software community is a vibrant place, with software developers, font designers, translators, voice-over artists, and general user testers who come together for the love of their language and the promotion of critical thinking and modern language usage in Tamil. We identify a need for institutional support at various stages, from grooming software developers in Tamil to providing a marketing platform for Tamil software. There is a bright future for Tamil software if we meet the challenges it brings with it.

[74]
Title: RED-Net: A Recurrent Encoder-Decoder Network for Video-based Face Alignment
Comments: International Journal of Computer Vision. arXiv admin note: text overlap with arXiv:1608.05477
Subjects: Computer Vision and Pattern Recognition (cs.CV)

We propose a novel method for real-time face alignment in videos based on a recurrent encoder-decoder network model. Our proposed model predicts 2D facial point heat maps regularized by both detection and regression loss, while uniquely exploiting recurrent learning at both spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, in order to enable iterative coarse-to-fine face alignment using a single network model, instead of relying on traditional cascaded model ensembles. At the temporal level, we first decouple the features in the bottleneck of the network into temporal-variant factors, such as pose and expression, and temporal-invariant factors, such as identity information. Temporal recurrent learning is then applied to the decoupled temporal-variant features. We show that such feature disentangling yields better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, showing the importance of each component of our proposed model, as well as superior results over the state of the art and several variations of our method in standard datasets.

[75]
Title: Approximate Early Output Asynchronous Adders Based on Dual-Rail Data Encoding and 4-Phase Return-to-Zero and Return-to-One Handshaking
Journal-ref: Intl. Jour. of Ckts., Sys. and Signal Processing, vol. 11, pp. 445-453, 2017
Subjects: Hardware Architecture (cs.AR)

[76]
Title: Toward Stronger Robustness of Network Controllability: A Snapback Network Model
Subjects: Systems and Control (cs.SY)

A new complex network model, called the q-snapback network, is introduced. Basic topological characteristics of the network, such as degree distribution, average path length, clustering coefficient and Pearson correlation coefficient, are evaluated. The typical 4-motifs of the network are simulated. The robustness of both the state and structural controllabilities of the network against targeted and random node- and edge-removal attacks, with comparisons to the multiplex congruence network and the generic scale-free network, is presented. It is shown that the q-snapback network has the strongest controllability robustness, due to its advantageous inherent structure with many chain- and loop-motifs.

[77]
Title: Invariants of multidimensional time series based on their iterated-integral signature
Subjects: Computer Vision and Pattern Recognition (cs.CV); Representation Theory (math.RT)

We introduce a novel class of features for multidimensional time series that are invariant with respect to transformations of the ambient space. The general linear group, the group of rotations and the group of permutations of the axes are considered. The starting point for their construction is Chen's iterated-integral signature.
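As an illustration of the construction's starting point, here is a minimal NumPy sketch (not the authors' code; function and variable names are hypothetical) of the depth-2 iterated-integral signature of a piecewise-linear path. The antisymmetric part of the second level, the Lévy area, is one example of a rotation-invariant feature of the kind the abstract considers.

```python
import numpy as np

def signature_level2(path):
    """Depth-2 iterated-integral signature of a piecewise-linear path.

    path: (n, d) array of points. Returns (level1, level2) where
    level1[i] is the total increment in coordinate i and
    level2[i, j] is the iterated integral of dx_i dx_j over s < t.
    """
    inc = np.diff(path, axis=0)  # increments of each linear segment
    level1 = inc.sum(axis=0)
    # Cumulative increment before each segment, plus the within-segment term.
    cum = np.vstack([np.zeros(path.shape[1]), np.cumsum(inc, axis=0)[:-1]])
    level2 = cum.T @ inc + 0.5 * inc.T @ inc
    return level1, level2

# The antisymmetric part of level 2 (the Levy area) is invariant under
# rotations of the plane, illustrating one rotation-invariant feature.
rng = np.random.default_rng(0)
path = rng.standard_normal((50, 2)).cumsum(axis=0)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
_, S = signature_level2(path)
_, S_rot = signature_level2(path @ R.T)
area = 0.5 * (S[0, 1] - S[1, 0])
area_rot = 0.5 * (S_rot[0, 1] - S_rot[1, 0])  # agrees up to floating-point error
```

For a closed path, this Lévy area equals the signed area enclosed, which a rotation of the plane cannot change.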

[78]
Title: Overcoming the vanishing gradient problem in plain recurrent networks
Subjects: Neural and Evolutionary Computing (cs.NE)

Plain recurrent networks greatly suffer from the vanishing gradient problem, while Gated Neural Networks (GNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs. This paper shows how we can address this problem in a plain recurrent network by analyzing the gating mechanisms in GNNs. We propose a novel network called the Recurrent Identity Network (RIN), which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks. The RINs demonstrate competitive performance and converge faster in all tasks. Notably, small RIN models produce 12%--67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset.
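The abstract does not spell out the RIN update rule; assuming the gate-free, identity-augmented recurrence that the network's name suggests, a single step might look like the following hypothetical sketch (not the authors' code):

```python
import numpy as np

def rin_step(x_t, h_prev, W, U, b):
    """One step of a hypothetical RIN-style cell: a plain ReLU RNN whose
    recurrent weight is augmented with a fixed identity matrix,
    h_t = relu(W x_t + (U + I) h_prev + b), giving the state a direct
    additive path intended to mitigate vanishing gradients."""
    I = np.eye(len(h_prev))
    return np.maximum(0.0, W @ x_t + (U + I) @ h_prev + b)

# With zero weights, the identity path simply carries the state forward.
h = rin_step(np.zeros(3), np.array([1.0, 2.0]),
             np.zeros((2, 3)), np.zeros((2, 2)), np.zeros(2))
```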

[79]
Title: Challenges of the Dynamic Detection of Functionally Similar Code Fragments
Journal-ref: Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR), IEEE, 2012
Subjects: Software Engineering (cs.SE)

Classic clone detection approaches are hardly capable of finding redundant code that has been developed independently, i.e., that is not the result of copy&paste. To automatically detect such functionally similar code of independent origin, we experimented with a dynamic detection approach that applies random testing to selected chunks of code, similar to Jiang and Su's approach. We found that such an approach faces several limitations in its application to diverse Java systems. This paper details our insights regarding these challenges of dynamic detection of functionally similar code fragments. Our findings support a substantiated discussion on detection approaches and serve as a starting point for future research.
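The random-testing idea can be sketched as follows (a toy illustration, not the paper's tooling; the helper name and input generator are hypothetical): two code fragments are treated as candidate functional clones if they agree on a batch of randomly generated inputs.

```python
import random

def behaviourally_similar(f, g, gen, trials=100, seed=0):
    """Toy random-testing check in the spirit of Jiang and Su's approach:
    execute both fragments on the same random inputs and report whether
    their outputs always agree."""
    rnd = random.Random(seed)
    return all(f(x) == g(x) for x in (gen(rnd) for _ in range(trials)))

# Independently written fragments computing the same function:
gauss_sum_loop = lambda n: sum(range(n + 1))
gauss_sum_closed = lambda n: n * (n + 1) // 2
ints = lambda r: r.randint(0, 1000)
```

Such a check can only ever provide evidence of similarity, which is one source of the limitations the paper discusses.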

[80]
Title: Anatomy of an online misinformation network
Comments: 28 pages, 11 figures, submitted to PLOS ONE
Subjects: Social and Information Networks (cs.SI); Physics and Society (physics.soc-ph)

Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy filters public tweets that include links to unverified claims or fact-checking articles. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.
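For readers unfamiliar with the analysis, k-core decomposition can be computed with the standard peeling algorithm. The following self-contained sketch (toy graph with hypothetical account names, not the Hoaxy pipeline) illustrates how the main core is obtained:

```python
from collections import defaultdict

def core_numbers(edges):
    """Peeling algorithm for k-core decomposition: repeatedly remove a
    minimum-degree node; the running maximum of removal degrees is the
    removed node's core number."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    core, remaining, k = {}, set(adj), 0
    while remaining:
        u = min(remaining, key=deg.get)
        k = max(k, deg[u])
        core[u] = k
        remaining.remove(u)
        for v in adj[u]:
            if v in remaining:
                deg[v] -= 1
    return core

# Toy retweet graph: a dense core {a, b, c, d} with peripheral
# accounts e, f, g attached to it.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("c", "d"),
         ("e", "a"), ("f", "b"), ("g", "c")]
cores = core_numbers(edges)
main_core = {u for u, c in cores.items() if c == max(cores.values())}
```

Moving "from the periphery to the core" corresponds to restricting attention to nodes of increasing core number.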

[81]
Title: An Iterative Closest Point Method for Unsupervised Word Translation
Subjects: Learning (cs.LG)

Unsupervised word translation from non-parallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions have achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Our simple linear method achieves performance better than or equal to recent state-of-the-art deep adversarial approaches, and typically does a little better than the supervised baseline. Our method is also efficient, easy to parallelize and interpretable.
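The two-stage idea can be sketched in a few lines of NumPy (a hypothetical toy implementation, not the authors' code): whiten both embedding sets so their second moments agree, then alternate nearest-neighbour matching with an orthogonal Procrustes update of the linear map.

```python
import numpy as np

def icp_translate(X, Y, iters=10):
    """Toy ICP-style alignment of two word-vector sets X, Y (rows are
    word embeddings). Step 1: whiten each set so second moments match.
    Step 2: alternate matching and orthogonal Procrustes refinement."""
    def whiten(Z):
        Z = Z - Z.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(Z.T))
        return Z @ vecs @ np.diag(vals ** -0.5) @ vecs.T

    Xw, Yw = whiten(X), whiten(Y)
    W = np.eye(X.shape[1])
    for _ in range(iters):
        # Assign each source word to its nearest target under the current map.
        d = ((Xw @ W)[:, None, :] - Yw[None, :, :]) ** 2
        match = d.sum(-1).argmin(1)
        # Orthogonal Procrustes: argmin over orthogonal W of ||Xw W - Yw[match]||.
        U, _, Vt = np.linalg.svd(Xw.T @ Yw[match])
        W = U @ Vt
    return W, match

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
W, match = icp_translate(X, X.copy())  # two identical "languages"
```

Because every update is a closed-form linear-algebra step, the procedure is fast and each component of the learned map is directly inspectable.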

[82]
Title: Latitude: A Model for Mixed Linear-Tropical Matrix Factorization
Comments: 14 pages, 6 figures. To appear in 2018 SIAM International Conference on Data Mining (SDM '18). For the source code, see this https URL
Subjects: Learning (cs.LG)

Nonnegative matrix factorization (NMF) is one of the most frequently used matrix factorization models in data analysis. A significant reason for the popularity of NMF is its interpretability and the `parts of whole' interpretation of its components. Recently, max-times, or subtropical, matrix factorization (SMF) has been introduced as an alternative model with an equally interpretable `winner takes it all' interpretation. In this paper we propose a new mixed linear--tropical model, and a new algorithm, called Latitude, that combines NMF and SMF and is able to smoothly alternate between the two. In our model, the data is modeled using latent factors and latent parameters that control whether the factors are interpreted as NMF or SMF features, or their mixtures. We present an algorithm for our novel matrix factorization. Our experiments show that our algorithm improves over both baselines, and can yield interpretable results that reveal more of the latent structure than either NMF or SMF alone.

[83]
Title: A Formalization for Specifying and Implementing Correct Pull-Stream Modules
Subjects: Programming Languages (cs.PL)

Pull-stream is a JavaScript demand-driven functional design pattern based on callback functions that enables the creation and easy composition of independent modules that are used to create streaming applications. It is used in popular open source projects and the community around it has created over a hundred compatible modules. While the description of the pull-stream design pattern may seem simple, it does exhibit complicated termination cases. Despite the popularity and large uptake of the pull-stream design pattern, there was no existing formal specification that could help programmers reason about the correctness of their implementations.
Thus, the main contribution of this paper is to provide a formalization for specifying and implementing correct pull-stream modules based on the following: (1) we show the pull-stream design pattern is a form of declarative concurrent programming; (2) we present an event-based protocol language that supports our formalization, independently of JavaScript; (3) we provide the first precise and explicit definition of the expected sequences of events that happen at the interface of two modules, which we call the pull-stream protocol; (4) we specify reference modules that exhibit the full range of behaviors of the pull-stream protocol; (5) we validate our definitions against the community expectations by testing the existing core pull-stream modules against them and identify unspecified behaviors in existing modules.
Our approach helps to better understand the pull-stream protocol, to ensure interoperability of community modules, and to concisely and precisely specify new pull-stream abstractions in papers and documentation.

[84]
Title: Fine-tuned Language Models for Text Classification
Subjects: Computation and Language (cs.CL); Learning (cs.LG); Machine Learning (stat.ML)

Transfer learning has revolutionized computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Fine-tuned Language Models (FitLaM), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a state-of-the-art language model. Our method significantly outperforms the state-of-the-art on five text classification tasks, reducing the error by 18-24% on the majority of datasets. We open-source our pretrained models and code to enable adoption by the community.

[85]
Title: LCD: Low Latency Command Dissemination for A Platoon of Vehicles
Comments: 8 pages, 5 figures, accepted in IEEE International Conference on Communications (ICC), 2018
Subjects: Performance (cs.PF); Systems and Control (cs.SY)

In a vehicular platoon, a lead vehicle that is responsible for managing the platoon's moving directions and velocity periodically disseminates control commands to the following vehicles based on vehicle-to-vehicle communications. However, reducing command dissemination latency with multiple vehicles while ensuring successful message delivery to the tail vehicle is challenging. We propose a new linear dynamic programming algorithm using backward induction and interchange arguments to minimize the dissemination latency of the vehicles. Furthermore, a closed form of the dissemination latency in a vehicular platoon is obtained by utilizing a Markov chain with an M/M/1 queuing model. Simulation results confirm that the proposed dynamic programming algorithm improves the dissemination rate by at least 50.9%, compared to similar algorithms in the literature. Moreover, it approaches the best performance, with a maximum latency gap of 0.2 seconds.

[86]
Title: Topic Lifecycle on Social Networks: Analyzing the Effects of Semantic Continuity and Social Communities
Comments: 12 pages, 5 figures (13 figures if sub-figures are counted separately), To Appear in ECIR 2018
Subjects: Social and Information Networks (cs.SI); Physics and Society (physics.soc-ph)

Topic lifecycle analysis on Twitter, a branch of study that investigates Twitter topics from their birth through their lifecycle to their death, has gained immense mainstream research popularity. In the literature, topics are often treated as one of (a) hashtags (independent from other hashtags), (b) a burst of keywords in a short time span or (c) a latent concept space captured by advanced text analysis methodologies, such as Latent Dirichlet Allocation (LDA). The first two approaches are not capable of recognizing topics where different users use different hashtags to express the same concept (semantically related), while the third approach misses out on the user's explicit intent expressed via hashtags. In our work, we use a word-embedding-based approach to cluster different hashtags together, along with the temporal concurrency of the hashtag usages, thus forming topics (semantically and temporally related groups of hashtags). We present a novel analysis of topic lifecycles with respect to communities. We characterize the participation of social communities in the topic clusters, and analyze the lifecycle of topic clusters with respect to such participation. We derive first-of-its-kind insights into the complex evolution of topics over communities and time: the temporal morphing of topics over hashtags within communities, how hashtags die in some communities but morph into other hashtags in other communities (that is, it is a community-level phenomenon), and how specific communities adopt specific hashtags. Our work is fundamental in the space of topic lifecycle modeling and understanding in communities: it redefines our understanding of topic lifecycles and shows that the social boundaries of topic lifecycles are deeply ingrained with community behavior.

[87]
Title: Private Information Retrieval Through Wiretap Channel II: Privacy Meets Security
Comments: Submitted to IEEE Transactions on Information Theory, January 2018
Subjects: Information Theory (cs.IT); Cryptography and Security (cs.CR)

We consider the problem of private information retrieval through wiretap channel II (PIR-WTC-II). In PIR-WTC-II, a user wants to retrieve a single message (file) privately out of $M$ messages, which are stored in $N$ replicated and non-communicating databases. An external eavesdropper observes a fraction $\mu_n$ (of its choice) of the traffic exchanged between the $n$th database and the user. In addition to the privacy constraint, the databases should encode the returned answer strings such that the eavesdropper learns absolutely nothing about the \emph{contents} of the databases. We aim at characterizing the capacity of the PIR-WTC-II under the combined privacy and security constraints. We obtain a general upper bound for the problem in the form of a max-min optimization problem, which extends the converse proof of the PIR problem under asymmetric traffic constraints. We propose an achievability scheme that satisfies the security constraint by encoding a secret key, which is generated securely at each database, into an artificial noise vector using an MDS code. The user and the databases operate at one of the corner points of the achievable scheme for the PIR under asymmetric traffic constraints such that the retrieval rate is maximized under the imposed security constraint. The upper bound and the lower bound match for the case of $M=2$ and $M=3$ messages, for any $N$, and any $\boldsymbol{\mu}=(\mu_1, \cdots, \mu_N)$.

[88]
Title: Contextual and Position-Aware Factorization Machines for Sentiment Classification
Subjects: Computation and Language (cs.CL)

While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence). Factorization Machines provide a possible approach to learning element-wise interaction for recommender systems, but they are not directly applicable to our task due to their inability to model contexts and word sequences. In this work, we develop two Position-aware Factorization Machines which consider word interaction, context and position information. Such information is jointly encoded in a set of sentiment-oriented word interaction (SWI) vectors. Compared to traditional word embeddings, SWI vectors explicitly capture sentiment-oriented word interaction and simplify the parameter learning. Experimental results show that while they have comparable performance with state-of-the-art methods for document-level classification, they benefit snippet/sentence-level sentiment analysis.
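For reference, the second-order Factorization Machine score that the proposed models build on can be computed in linear time per factor using Rendle's identity; a minimal sketch with hypothetical variable names (not the authors' code):

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order Factorization Machine prediction,
    y(x) = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j,
    where row v_i of V is the factor vector of feature i. The pairwise
    term uses Rendle's O(kn) identity:
    sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f ((x V)_f^2 - (x^2)(V^2)_f)."""
    pairwise = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return w0 + w @ x + pairwise

rng = np.random.default_rng(2)
x, w = rng.standard_normal(6), rng.standard_normal(6)
V = rng.standard_normal((6, 3))  # one 3-dimensional factor vector per feature
score = fm_score(x, 0.3, w, V)
```

The factorized interaction term is what makes learning element-wise interactions tractable; the paper's contribution is to inject context and position information that this plain form lacks.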

[89]
Title: Integrating planning for task-completion dialogue policy learning
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)

Training a task-completion dialogue agent with real users via reinforcement learning (RL) can be prohibitively expensive, because it requires many interactions with users. One alternative is to resort to a user simulator, but the discrepancy between simulated and real users makes the learned policy unreliable in practice. This paper addresses these challenges by integrating planning into dialogue policy learning based on the Dyna-Q framework, and provides a more sample-efficient approach to learning dialogue policies. The proposed agent consists of a planner, trained online with limited real user experience, that can generate large amounts of simulated experience to supplement the limited real user experience, and a policy model trained on these hybrid experiences. The effectiveness of our approach is validated on a movie-booking task in both a simulation setting and a human-in-the-loop setting.

### Cross-lists for Fri, 19 Jan 18

[90]  arXiv:1801.05894 (cross-list from math.HO) [pdf, other]
Title: Deep Learning: An Introduction for Applied Mathematicians
Subjects: History and Overview (math.HO); Learning (cs.LG); Numerical Analysis (math.NA); Machine Learning (stat.ML)

Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the-art software on a large scale image classification problem. We finish with references to the current literature.

[91]  arXiv:1801.05928 (cross-list from math.NT) [pdf, ps, other]
Title: On partitions into squares of distinct integers whose reciprocals sum to 1
Authors: Max A. Alekseyev
Subjects: Number Theory (math.NT); Discrete Mathematics (cs.DM)

In 1963, Graham proved that all integers greater than 77 (but not 77 itself) can be partitioned into distinct positive integers whose reciprocals sum to 1. He further conjectured that any sufficiently large integer can be partitioned into squares of distinct positive integers whose reciprocals sum to 1. In this study, we establish the exact bound for the existence of such representations by proving that 8542 is the largest integer with no such partition.

[92]  arXiv:1801.05965 (cross-list from math.LO) [pdf, ps, other]
Title: Complexity of Combinations of Qualitative Constraint Satisfaction Problems
Subjects: Logic (math.LO); Computational Complexity (cs.CC)

The CSP of a first-order theory $T$ is the problem of deciding for a given finite set $S$ of atomic formulas whether $T \cup S$ is satisfiable. Let $T_1$ and $T_2$ be two theories with countably infinite models and disjoint signatures. Nelson and Oppen presented conditions that imply decidability (or polynomial-time decidability) of $\mathrm{CSP}(T_1 \cup T_2)$ under the assumption that $\mathrm{CSP}(T_1)$ and $\mathrm{CSP}(T_2)$ are decidable (or polynomial-time decidable). We show that for a large class of $\omega$-categorical theories $T_1, T_2$ the Nelson-Oppen conditions are not only sufficient, but also necessary for polynomial-time tractability of $\mathrm{CSP}(T_1 \cup T_2)$ (unless P=NP).

[93]  arXiv:1801.05984 (cross-list from eess.SP) [pdf]
Title: Prediction of the Optimal Threshold Value in DF Relay Selection Schemes Based on Artificial Neural Networks
Comments: 6 pages,IEEE INnovations in Intelligent SysTems and Applications (INISTA), 2016 International Symposium on
Subjects: Signal Processing (eess.SP); Neural and Evolutionary Computing (cs.NE)

In wireless communications, the cooperative communication (CC) technology promises performance gains compared to traditional Single-Input Single-Output (SISO) techniques. Therefore, the CC technique is one of the nominees for 5G networks. In the Decode-and-Forward (DF) relaying scheme, which is one of the CC techniques, determination of the threshold value at the relay has a key role for the system performance and power usage. In this paper, we propose prediction of the optimal threshold values for the best relay selection scheme in cooperative communications, based on Artificial Neural Networks (ANNs), for the first time in the literature. The average link qualities and the number of relays have been used as inputs in the prediction of optimal threshold values using two types of ANN: Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks. The MLP network performs better than the RBF network on the prediction of the optimal threshold value when the same number of neurons is used at the hidden layer of both networks. Besides, the optimal threshold values obtained using ANNs are verified against the optimal threshold values obtained numerically using the closed-form expression derived for the system. The results show that the optimal threshold values obtained by ANNs for the best relay selection scheme provide a minimum Bit-Error-Rate (BER) because of the reduced probability of error propagation. Also, for the same BER performance goal, prediction of optimal threshold values provides 2 dB less power usage, which is a great gain in terms of green communication.

[94]  arXiv:1801.06014 (cross-list from eess.SP) [pdf, other]
Title: Image Enhancement and Noise Reduction Using Modified Delay-Multiply-and-Sum Beamformer: Application to Medical Photoacoustic Imaging
Comments: This paper was accepted and presented at Iranian Conference on Electrical Engineering (ICEE) 2017
Journal-ref: Iranian Conference on Electrical Engineering, 2-4 May 2017
Subjects: Signal Processing (eess.SP); Information Theory (cs.IT)

Photoacoustic imaging (PAI) is an emerging biomedical imaging modality capable of combining the high contrast of optical imaging with the high resolution of UltraSound (US) imaging. When a short-duration laser pulse illuminates the tissue as the imaging target, the tissue induces US waves, and the detected waves can be used to reconstruct the optical absorption distribution. Since the receiving part of PAI consists of US waves, a large number of beamforming algorithms from US imaging can be applied to PA imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in US imaging. However, using the DAS beamformer leads to low-resolution images and a large contribution of off-axis signals. To address these problems, a new paradigm named Delay-Multiply-and-Sum (DMAS), previously used as a reconstruction algorithm in confocal microwave imaging for breast cancer detection, was introduced for US imaging. Consequently, DMAS was used in PA imaging systems, and it was shown that this algorithm results in resolution enhancement and sidelobe reduction. However, in the presence of a high level of noise, the reconstructed image still suffers from a high contribution of noise. In this paper, a modified version of the DMAS beamforming algorithm is proposed, based on using DAS inside the DMAS formula expansion. The quantitative and qualitative results show that the proposed method yields more noise reduction and resolution enhancement at the expense of contrast degradation. For the simulation, a two-point target, along with lateral variation at two imaging depths, is employed and evaluated under a high level of noise in the imaging medium. Compared to DMAS, the proposed algorithm reduces the lateral valley by about 19 dB, resulting in a more distinguishable two-point target. Moreover, sidelobe levels are reduced by about 25 dB.

[95]  arXiv:1801.06077 (cross-list from q-fin.CP) [pdf, other]
Title: The QLBS Q-Learner Goes NuQLear: Fitted Q Iteration, Inverse RL, and Option Portfolios
Authors: Igor Halperin
Subjects: Computational Finance (q-fin.CP); Learning (cs.LG)

The QLBS model is a discrete-time option hedging and pricing model that is based on Dynamic Programming (DP) and Reinforcement Learning (RL). It combines the famous Q-Learning method for RL with the Black-Scholes (-Merton) model's idea of reducing the problem of option pricing and hedging to the problem of optimal rebalancing of a dynamic replicating portfolio for the option, which is made of a stock and cash. Here we expand on several NuQLear (Numerical Q-Learning) topics with the QLBS model. First, we investigate the performance of Fitted Q Iteration for a RL (data-driven) solution to the model, and benchmark it versus a DP (model-based) solution, as well as versus the BSM model. Second, we develop an Inverse Reinforcement Learning (IRL) setting for the model, where we only observe prices and actions (re-hedges) taken by a trader, but not rewards. Third, we outline how the QLBS model can be used for pricing portfolios of options, rather than a single option in isolation, thus providing its own, data-driven and model independent solution to the (in)famous volatility smile problem of the Black-Scholes model.

[96]  arXiv:1801.06159 (cross-list from stat.ML) [pdf, other]
Title: When Does Stochastic Gradient Algorithm Work Well?
Subjects: Machine Learning (stat.ML); Learning (cs.LG); Optimization and Control (math.OC)

In this paper, we consider a general stochastic optimization problem which is often at the core of supervised learning, such as deep learning and linear classification. We consider a standard stochastic gradient descent (SGD) method with a fixed, large step size and propose a novel assumption on the objective function, under which this method has improved convergence rates (to a neighborhood of the optimal solutions). We then empirically demonstrate that these assumptions hold for logistic regression and standard deep neural networks on classical data sets. Thus our analysis helps to explain when efficient behavior can be expected from the SGD method in training classification models and deep neural networks.
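The setting studied here, plain SGD with a fixed step size on logistic regression, can be sketched as follows (a toy illustration on separable data, not the paper's experiments; all names are hypothetical):

```python
import numpy as np

def sgd_logistic(X, y, step=0.5, epochs=100, seed=0):
    """Plain SGD on the logistic (log-loss) objective with a fixed,
    relatively large step size: one randomly ordered pass over the
    samples per epoch, no step-size decay."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w -= step * (p - y[i]) * X[i]  # per-sample log-loss gradient
    return w

# Linearly separable toy data with labels in {0, 1}.
X = np.array([[1., 2.], [2., 1.], [-1., -2.], [-2., -1.]])
y = np.array([1., 1., 0., 0.])
w = sgd_logistic(X, y)
acc = ((X @ w > 0) == (y == 1)).mean()
```

The fixed large step size is what the paper's assumption is meant to justify: without decaying steps, SGD converges only to a neighborhood of the optimum, which is often sufficient for classification accuracy.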

### Replacements for Fri, 19 Jan 18

[97]  arXiv:1505.04252 (replaced) [pdf, ps, other]
Title: Global Convergence of Unmodified 3-Block ADMM for a Class of Convex Minimization Problems
Subjects: Optimization and Control (math.OC); Learning (cs.LG); Machine Learning (stat.ML)
[98]  arXiv:1512.05403 (replaced) [pdf, other]
Title: Discontinuous Galerkin Deterministic Solvers for a Boltzmann-Poisson Model of Hot Electron Transport by Averaged Empirical Pseudopotential Band Structures
Comments: submission to CMAME (Computer Methods in Applied Mechanics and Engineering) Journal as a reply to the reviewers on February 2017
Journal-ref: Computer Methods in Applied Mechanics and Engineering, Volume 321, 2017, Pages 209-234
Subjects: Computational Engineering, Finance, and Science (cs.CE); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Numerical Analysis (math.NA)
[99]  arXiv:1512.05648 (replaced) [pdf, ps, other]
Title: Curves in $\mathbb{R}^4$ and two-rich points
Comments: 20 pages, 0 figures. v2: fixed an error in Section 5.2
Journal-ref: Discrete. Comput. Geom. 58: 232--253, 2017
Subjects: Combinatorics (math.CO); Computational Geometry (cs.CG)
[100]  arXiv:1603.01486 (replaced) [pdf, other]
Title: Distributed $(\Delta+1)$-Coloring in Sublogarithmic Rounds
Subjects: Data Structures and Algorithms (cs.DS); Distributed, Parallel, and Cluster Computing (cs.DC)
[101]  arXiv:1605.01021 (replaced) [pdf, ps, other]
Title: An Information Theoretic Framework For Designing Information Elicitation Mechanisms That Reward Truth-telling
Subjects: Computer Science and Game Theory (cs.GT)
[102]  arXiv:1605.02408 (replaced) [pdf, ps, other]
Title: Structured Nonconvex and Nonsmooth Optimization: Algorithms and Iteration Complexity Analysis
Subjects: Optimization and Control (math.OC); Learning (cs.LG); Machine Learning (stat.ML)
[103]  arXiv:1605.08288 (replaced) [pdf, other]
Title: A counterexample to Thiagarajan's conjecture on regular event structures
Subjects: Formal Languages and Automata Theory (cs.FL); Discrete Mathematics (cs.DM); Combinatorics (math.CO)
[104]  arXiv:1606.01396 (replaced) [pdf, ps, other]
Title: Implicit Deflation for Univariate Polynomial Root-finding
Subjects: Numerical Analysis (math.NA); Numerical Analysis (cs.NA)
[105]  arXiv:1608.02389 (replaced) [pdf, other]
Title: On $H$-Topological Intersection Graphs
Subjects: Discrete Mathematics (cs.DM); Combinatorics (math.CO)
[106]  arXiv:1610.03052 (replaced) [pdf, other]
Title: Verification of the Tree-Based Hierarchical Read-Copy Update in the Linux Kernel
Subjects: Logic in Computer Science (cs.LO); Distributed, Parallel, and Cluster Computing (cs.DC); Operating Systems (cs.OS); Software Engineering (cs.SE)
[107]  arXiv:1611.01364 (replaced) [pdf, ps, other]
Title: A Generalization of the Minisum and Minimax Voting Methods
Subjects: Computer Science and Game Theory (cs.GT); Data Structures and Algorithms (cs.DS)
[108]  arXiv:1612.08070 (replaced) [pdf, other]
Title: Spectral norm and quantum speed-up
Comments: 17 pages, 1 figure, typos corrected, changed title
Subjects: Quantum Physics (quant-ph); Computational Complexity (cs.CC)
[109]  arXiv:1702.01325 (replaced) [pdf, other]
Title: Combining and Steganography of 3D Face Textures
Comments: 6 pages, 10 figures, 16 equations, 5 sections
Journal-ref: Volume 5, Issue 2 - Issue Serial Number 10, Autumn 2017, Page 1-1
Subjects: Multimedia (cs.MM)
[110]  arXiv:1702.07958 (replaced) [pdf, other]
Title: Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret
Comments: 22 pages, 2 figures; ICML 2017; this version includes additional discussions of Newtron, and a variant of SOBA that directly uses an online exp-concave optimization oracle
Subjects: Learning (cs.LG); Machine Learning (stat.ML)
[111]  arXiv:1703.10446 (replaced) [pdf, other]
Title: Gelly-Scheduling: Distributed Graph Processing for Network Service Placement in Community Networks
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Data Structures and Algorithms (cs.DS)
[112]  arXiv:1704.00605 (replaced) [pdf, other]
Title: Faster Base64 Encoding and Decoding Using AVX2 Instructions
Comments: software at this https URL
Subjects: Mathematical Software (cs.MS)
[113]  arXiv:1704.08443 (replaced) [pdf, other]
Title: DNA Steganalysis Using Deep Recurrent Neural Networks
Subjects: Learning (cs.LG); Multimedia (cs.MM)
[114]  arXiv:1706.03282 (replaced) [pdf, other]
Title: Segmentation of nearly isotropic overlapped tracks in photomicrographs using successive erosions as watershed markers
Subjects: Computer Vision and Pattern Recognition (cs.CV); Instrumentation and Detectors (physics.ins-det)
[115]  arXiv:1706.04034 (replaced) [pdf, other]
Title: Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty
Comments: Major update: more results, depth filter released as opensource, 34 pages
Subjects: Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
[116]  arXiv:1707.00810 (replaced) [pdf, other]
Title: Rényi Resolvability and Its Applications to the Wiretap Channel
Comments: 49 pages. An error in the exponent in Theorem 6 has been fixed
Subjects: Information Theory (cs.IT); Cryptography and Security (cs.CR)
[117]  arXiv:1707.03794 (replaced) [pdf, ps, other]
Title: Coupling Load-Following Control with OPF
Subjects: Optimization and Control (math.OC); Systems and Control (cs.SY)
[118]  arXiv:1709.02897 (replaced) [pdf]
Title: Analysing Scientific Collaborations of New Zealand Institutions using Scopus Bibliometric Data
Comments: 10 pages, 15 figures, accepted author copy with link to research data, Analysing Scientific Collaborations of New Zealand Institutions using Scopus Bibliometric Data. In Proceedings of ACSW 2018: Australasian Computer Science Week 2018, January 29-February 2, 2018, Brisbane, QLD, Australia
Subjects: Digital Libraries (cs.DL); Social and Information Networks (cs.SI)
[119]  arXiv:1709.08122 (replaced) [pdf, other]
Title: A Simple Algorithm for Computing a Cycle Separator
Subjects: Computational Geometry (cs.CG); Discrete Mathematics (cs.DM)
[120]  arXiv:1710.01218 (replaced) [pdf, ps, other]
Title: Reducing Complexity of HEVC: A Deep Learning Approach
Comments: 17 pages, with 12 figures and 8 tables
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[121]  arXiv:1710.02030 (replaced) [pdf, other]
Title: McDiarmid Drift Detection Methods for Evolving Data Streams
Comments: 9 pages, 3 figures, 3 tables
Subjects: Machine Learning (stat.ML); Databases (cs.DB); Learning (cs.LG)
[122]  arXiv:1711.04126 (replaced) [pdf, other]
Title: Disease Prediction from Electronic Health Records Using Generative Adversarial Networks
Subjects: Learning (cs.LG); Machine Learning (stat.ML)
[123]  arXiv:1711.05788 (replaced) [pdf, other]
Title: Quantile Markov Decision Process
Subjects: Artificial Intelligence (cs.AI)
[124]  arXiv:1711.05825 (replaced) [pdf, other]
Title: Bootstrapped synthetic likelihood
Subjects: Computation (stat.CO); Artificial Intelligence (cs.AI); Data Analysis, Statistics and Probability (physics.data-an); Methodology (stat.ME); Machine Learning (stat.ML)
[125]  arXiv:1711.07291 (replaced) [pdf]
Title: Do bibliometrics and altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime, altmetrics, and citation data
Subjects: Digital Libraries (cs.DL)
[126]  arXiv:1711.07474 (replaced) [pdf]
Title: XSAT of Linear CNF Formulas
Authors: Bernd R. Schuh
Subjects: Computational Complexity (cs.CC); Logic (math.LO)
[127]  arXiv:1712.01769 (replaced) [pdf, other]
Title: State-of-the-art Speech Recognition With Sequence-to-Sequence Models
Subjects: Computation and Language (cs.CL); Sound (cs.SD); Audio and Speech Processing (eess.AS); Machine Learning (stat.ML)
[128]  arXiv:1712.02225 (replaced) [pdf, other]
Title: Pose-Normalized Image Generation for Person Re-identification
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM); Machine Learning (stat.ML)
[129]  arXiv:1801.00746 (replaced) [pdf, other]
Title: Bridging the Gap Between Neural Networks and Neuromorphic Hardware with A Neural Network Compiler
Subjects: Neural and Evolutionary Computing (cs.NE); Emerging Technologies (cs.ET); Learning (cs.LG)
[130]  arXiv:1801.02745 (replaced) [pdf, other]
Title: Improved Capacity Upper Bounds for the Discrete-Time Poisson Channel
Comments: 11 pages, 3 figures. Fixed some typos in Section 4.2 and added more detailed argument in Section 4.4
Subjects: Information Theory (cs.IT)
[131]  arXiv:1801.03327 (replaced) [pdf, other]
Title: Priority Attachment: a Universal Mechanism for Generating Networks
Subjects: Social and Information Networks (cs.SI); Physics and Society (physics.soc-ph)
[132]  arXiv:1801.03481 (replaced) [pdf, ps, other]
Title: Latent Factor Analysis of Gaussian Distributions under Graphical Constraints
Subjects: Information Theory (cs.IT)
[133]  arXiv:1801.04299 (replaced) [pdf, other]
Title: Belief Propagation Decoding of Polar Codes on Permuted Factor Graphs
Comments: in IEEE Wireless Commun. and Networking Conf. (WCNC), April 2018
Subjects: Information Theory (cs.IT)
[134]  arXiv:1801.04310 (replaced) [pdf, ps, other]
Title: A Multi-Hop Framework for Multi-Source, Multi-Relay, All-Cast Channels
Comments: In support of a submission to ISIT 2018
Subjects: Information Theory (cs.IT)
[135]  arXiv:1801.04420 (replaced) [pdf, ps, other]
Title: LDPC Codes over Gaussian Multiple Access Wiretap Channel
Comments: 21 pages, 8 figures, A Revision Submitted to IET Communications
Subjects: Information Theory (cs.IT)
[136]  arXiv:1801.04774 (replaced) [pdf, other]
Title: DNA Molecular Storage System: Transferring Digitally Encoded Information through Bacterial Nanonetworks
Comments: 22 pages, 13 figures; removed wrong venue references, reordered bibliography accordingly to ACM guidelines
Subjects: Emerging Technologies (cs.ET)
[137]  arXiv:1801.05242 (replaced) [pdf, other]
Title: A Bayesian Conjugate Gradient Method
Subjects: Methodology (stat.ME); Numerical Analysis (cs.NA); Numerical Analysis (math.NA); Statistics Theory (math.ST)
[138]  arXiv:1801.05313 (replaced) [pdf, other]
Title: Digital identity, personal data and privacy protection
Subjects: Computers and Society (cs.CY)
[139]  arXiv:1801.05599 (replaced) [pdf, other]
Title: Additive Margin Softmax for Face Verification