A cooperative bin packing game is an N-person game, where the player set N consists of k bins, each of capacity 1, and n items of sizes a_1,...,a_n. The value of a coalition of players is defined as the maximum total size of items in the coalition that can be packed into the bins of the coalition. We present an alternative proof of the non-emptiness of the 1/3-core for all bin packing games and show how to (slightly) improve this bound of 1/3.
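As a concrete illustration of the coalition value, the following brute-force sketch (our own illustration, usable only for tiny instances; the function name and approach are not from the talk) enumerates subsets of a coalition's items and checks whether they fit into the coalition's bins:

```python
from itertools import combinations, product

def coalition_value(sizes, num_bins, capacity=1.0):
    """Value of a coalition: maximum total size of its items that can be
    packed into its bins (brute force, for tiny instances only)."""
    if num_bins == 0 or not sizes:
        return 0.0
    items = list(sizes)
    n = len(items)
    best = 0.0
    # try item subsets from largest to smallest cardinality
    for r in range(n, 0, -1):
        for subset in combinations(range(n), r):
            total = sum(items[i] for i in subset)
            if total <= best:
                continue  # cannot improve on the best packing found so far
            # try every assignment of the subset's items to the bins
            for assign in product(range(num_bins), repeat=r):
                loads = [0.0] * num_bins
                feasible = True
                for idx, b in zip(subset, assign):
                    loads[b] += items[idx]
                    if loads[b] > capacity + 1e-9:
                        feasible = False
                        break
                if feasible:
                    best = total
                    break
    return best
```

For example, a coalition holding one bin and items of sizes 0.6, 0.5 and 0.4 has value 1.0 (pack 0.6 and 0.4); with two bins its value is 1.5.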
Due to increasing health care costs, hospitals are forced to reduce the number of beds at the wards. This can be achieved by reducing the length of stay of patients, or by adjusting the admission schedule and, therefore, the operating room (OR) schedule. In this presentation, we focus on optimizing the OR schedule such that the number of required beds at the wards is minimized. The first step of our solution approach is to generate a specific number of surgery blocks. The blocks are generated by a column generation approach that maximizes OR utilization while satisfying demand, surgeon, and instrument constraints; in addition, the probability of overtime is restricted. The second step is a simulated annealing procedure that assigns each block to an OR and a day such that the maximum number of required beds is minimized. The solution approach is tested on data from the HagaZiekenhuis, and the results show that OR utilization can be improved and the number of required beds can be decreased. Consequently, this will decrease costs, increase the quality of care and level the workload on the wards.
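The second step can be sketched as follows (a toy version of simulated annealing, with our own names and a deliberately crude cost model: each block is assumed to occupy a fixed number of beds on its surgery day, standing in for the real length-of-stay profiles used in the talk):

```python
import math
import random

def anneal_blocks(blocks, num_ors, num_days, bed_demand, iters=5000, seed=0):
    """Toy simulated annealing: assign each surgery block to an (OR, day)
    slot so that the maximum daily bed demand is minimized.
    bed_demand[b] = beds block b occupies on its surgery day (a crude
    stand-in for a real length-of-stay profile)."""
    rng = random.Random(seed)
    assign = {b: (rng.randrange(num_ors), rng.randrange(num_days)) for b in blocks}

    def max_beds(a):
        per_day = [0] * num_days
        for b, (_, day) in a.items():
            per_day[day] += bed_demand[b]
        return max(per_day)

    cost = max_beds(assign)
    best_assign, best_cost = dict(assign), cost
    temp = 1.0
    for _ in range(iters):
        b = rng.choice(blocks)
        old = assign[b]
        assign[b] = (rng.randrange(num_ors), rng.randrange(num_days))
        new_cost = max_beds(assign)
        # accept improving moves always, worsening moves with Boltzmann prob.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / max(temp, 1e-9)):
            cost = new_cost
            if cost < best_cost:
                best_assign, best_cost = dict(assign), cost
        else:
            assign[b] = old  # reject: undo the move
        temp *= 0.999  # geometric cooling
    return best_assign, best_cost
```

With four identical blocks and two available days, the procedure spreads the blocks evenly, halving the peak bed demand compared to scheduling them all on one day.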
The basic idea of Statistical Model Checking is to repeatedly simulate the behaviour of a real-world system in order to estimate the probability that some performance property is satisfied. When the system model is huge, a single simulation run can take hours, so it is vital to be able to terminate as soon as possible. We show how currently used techniques can be compared to a random walk on the real line. We then discuss the shortcomings of these methods and how they can be alleviated.
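One widely used sequential stopping rule of this kind is Wald's sequential probability ratio test, in which the accumulated log-likelihood ratio is exactly a random walk on the real line, absorbed at two thresholds. A minimal sketch (our own illustration; the names and parameter choices are not from the talk):

```python
import math
import random

def sprt(simulate, p0, p1, alpha=0.05, beta=0.05, rng=None, max_runs=100000):
    """Wald's SPRT deciding between H0: p = p0 and H1: p = p1 (p1 < p0),
    where p is the unknown probability that a simulation run satisfies
    the property.  simulate(rng) performs one run and returns True/False.
    The log-likelihood ratio performs a random walk on the real line
    between two absorbing thresholds."""
    rng = rng or random.Random(0)
    lower = math.log(beta / (1 - alpha))       # absorb here: accept H0
    upper = math.log((1 - beta) / alpha)       # absorb here: accept H1
    llr = 0.0
    for n in range(1, max_runs + 1):
        if simulate(rng):
            llr += math.log(p1 / p0)           # step down: evidence for H0
        else:
            llr += math.log((1 - p1) / (1 - p0))  # step up: evidence for H1
        if llr <= lower:
            return "H0", n
        if llr >= upper:
            return "H1", n
    return "inconclusive", max_runs
```

The appeal is that the walk is typically absorbed after far fewer runs than a fixed-sample-size test would require, which matters when a single run takes hours.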
Radar systems are used widely for estimating the position, kinematic properties and other characteristics of both stationary and moving objects (also called targets).
Several radar parameters, such as the emitting power, direction of emission and the waveform characteristics, can be selected online for improved performance according to the scenario under consideration.
We will demonstrate how sensor management can be used to select the optimal radar parameters and thereby improve the performance of the two main radar functions, namely target tracking and the search for undetected targets.
The aforementioned research is being carried out within the MC IMPULSE project: https://mcimpulse.isy.liu.se
Radar systems are widely used for detecting and tracking stationary or moving objects (also called targets). The classical approach to detecting and tracking a target proceeds in two phases: a first detection step consists of pre-processing the raw radar signal to keep only detection “plots”; a second tracking step aims to estimate the actual state of the target from these detection “plots”. In the first step, a threshold decision is already made, which obviously results in a loss of information. To overcome this problem, the Track-before-Detect (TBD) approach bases the tracking on the raw measurements instead of on the plots.
First, a model-based integrated detection and tracking approach, extended to include ambiguities and eclipsing effects in range and Doppler, will be detailed. Then, it will be implemented by means of a particle filter. The proposed particle filter succeeds in resolving range and Doppler ambiguities and in detecting and tracking multiple targets in a TBD context.
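The core TBD idea can be sketched with a deliberately minimal 1-D particle filter (our own illustration, not the method of the talk: a single target, no amplitude state, and no range/Doppler ambiguities or eclipsing). Particles are weighted by the likelihood ratio of the raw intensity in their range cell, so no threshold is ever applied to the measurements:

```python
import math
import random

def tbd_particle_filter(frames, num_cells, n_particles=2000, snr_amp=5.0,
                        noise_sd=1.0, vel_sd=0.05, seed=1):
    """Minimal 1-D track-before-detect particle filter (illustrative only).
    Each frame is a list of raw intensities per range cell; a target adds
    amplitude snr_amp to the cell it occupies.  Particles carry
    (position, velocity) and are weighted by the likelihood ratio of the
    raw data -- no thresholding of the measurements is performed."""
    rng = random.Random(seed)
    parts = [(rng.uniform(0, num_cells), rng.uniform(-1.0, 1.0))
             for _ in range(n_particles)]
    estimates = []
    for z in frames:
        # predict: near-constant-velocity motion with small process noise
        parts = [((p + v) % num_cells, v + rng.gauss(0.0, vel_sd))
                 for p, v in parts]
        # update: likelihood ratio N(z; snr_amp, sd) / N(z; 0, sd) of the
        # intensity in the particle's cell, computed in log form
        weights = []
        for p, _ in parts:
            d = z[int(p) % num_cells]
            llr = (2.0 * d * snr_amp - snr_amp ** 2) / (2.0 * noise_sd ** 2)
            weights.append(math.exp(min(llr, 50.0)))
        total = sum(weights)
        weights = [w / total for w in weights]
        # point estimate: weighted mean particle position
        estimates.append(sum(w * p for w, (p, _) in zip(weights, parts)))
        # multinomial resampling
        parts = rng.choices(parts, weights=weights, k=n_particles)
    return estimates
```

Because the raw intensities are used directly, weak targets accumulate evidence over frames instead of being discarded by an early threshold decision.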
A road pricing game is a game where various stakeholders and/or regions with different (and usually conflicting) objectives compete for toll setting in a given transportation network in order to satisfy their individual objectives. We investigate some classical game-theoretical solution concepts for the road pricing game. We establish results for the road pricing game so that stakeholders and/or regions playing such a game will know beforehand what is obtainable. This will save time and argument, and, above all, get rid of feelings of unfairness among the competing actors and road users. In particular, we show that no pure Nash equilibrium exists among the actors, and further illustrate that even a mixed Nash equilibrium may not be achievable in the road pricing game. The paper also demonstrates the types of coalitions that are not only reachable, but also stable and profitable for the actors involved.
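For finite toy instances, the non-existence of a pure Nash equilibrium can be verified by brute force (our own sketch, not from the paper; strategies here stand in for discrete toll levels):

```python
from itertools import product

def pure_nash_equilibria(strategy_sets, payoffs):
    """Brute-force enumeration of the pure Nash equilibria of a finite game.
    strategy_sets[i] is player i's strategy set (e.g. admissible toll levels);
    payoffs[i](profile) is player i's payoff under a full strategy profile."""
    n = len(strategy_sets)
    equilibria = []
    for profile in product(*strategy_sets):
        # a profile is an equilibrium iff no player has a profitable deviation
        stable = all(
            payoffs[i](profile) >= payoffs[i](profile[:i] + (s,) + profile[i + 1:])
            for i in range(n)
            for s in strategy_sets[i]
        )
        if stable:
            equilibria.append(profile)
    return equilibria
```

A matching-pennies-like conflict of interests (player 1 wants the toll choices to coincide, player 2 wants them to differ) returns an empty list, the same qualitative phenomenon as the non-existence result above, whereas a coordination game returns two equilibria.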
We consider the invariant measure of a continuous-time Markov process in the quarter-plane. The basic solutions of the global balance equation are the geometric distributions. We first show that a finite linear combination of basic geometric distributions cannot be an invariant measure unless it consists of a single basic geometric distribution. Second, we show that a countable linear combination of geometric terms can be an invariant measure only if it consists of pairwise-coupled terms. As a consequence, we obtain a complete characterization of all countable linear combinations of geometric product forms that may yield an invariant measure for a homogeneous continuous-time Markov process in the quarter-plane.
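As a sketch of the setting (in our own notation, not taken from the talk): write the candidate measure as a geometric product form and substitute it into the global balance equation at an interior state. With homogeneous interior transition rates $q_{s,t}$, $(s,t)\in\{-1,0,1\}^2$, the measure

```latex
m(i,j) = \rho^{i}\sigma^{j}, \qquad
\sum_{s,t} q_{s,t}\left(\rho^{-s}\sigma^{-t} - 1\right) = 0,
```

satisfies interior balance precisely when $(\rho,\sigma)$ lies on the algebraic curve on the right. A linear combination $\sum_k \alpha_k\, \rho_k^{\,i}\sigma_k^{\,j}$ of such terms must additionally satisfy the balance equations at the boundaries, and this is where the pairwise-coupling condition on the terms enters.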
In this talk, we introduce the notion of deconvolution, which is known from convex analysis, as an operation in risk analysis. Based on the axiomatic approach of Artzner, Delbaen, Eber, and Heath (1999), Föllmer and Schied derived a dual representation of convex risk measures in 2002. We characterize the difference of two convex risk measures on L^p-spaces, give sufficient conditions for this difference to be a convex risk measure as well, and derive its dual representation. Similarly, we describe the sup-convolution of two convex risk measures and give its dual representation. The connection between deconvolution and difference is that the penalty function of the deconvolution of two convex risk measures is the difference of the penalty functions of these two risk measures, and vice versa.
Because of our heavy reliance on convex analysis, in particular on the deconvolution and the difference of convex functions, we dedicate the first part of the talk to this field. This enables us to use the elegant duality theory for convex risk measures.
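To fix notation (a sketch in our own notation, under suitable regularity conditions): a convex risk measure admits the dual representation of Föllmer and Schied,

```latex
\rho(X) = \sup_{Q}\bigl( \mathbb{E}_Q[-X] - \alpha(Q) \bigr),
\qquad
(f \ominus g)(x) = \sup_{y}\bigl( f(x+y) - g(y) \bigr),
```

where $\alpha$ is the penalty function and $\ominus$ is the deconvolution of convex analysis shown on the right. In this notation, the connection stated above reads: the deconvolution $\rho_1 \ominus \rho_2$ has penalty function $\alpha_1 - \alpha_2$.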
This presentation is about the design of an L^2-optimal relaxed causal sampler using sampled-data system theory. A lifted frequency-domain approach is used to obtain the existence conditions and the optimal sampler. The resulting optimal relaxed causal sampler is a cascade of a generalized sampler and a discrete system.