---
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the “License”); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an “AS IS” BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
  implied. See the License for the specific language governing
  permissions and limitations under the License.
layout: publication
title: Publication
subtitle: >
  The Power of Nested Parallelism in Big Data Processing -- Hitting Three Flies with One Slap
link-name: SIGMOD 2021
img-thumb: assets/img/screenshot/rheem.png
authors: Gábor E. Gévay, Jorge-Arnulfo Quiané-Ruiz and Volker Markl.
year: 2021
month: 06
day: 20
link-paper: https://www.researchgate.net/publication/350021175_The_Power_of_Nested_Parallelism_in_Big_Data_Processing_--_Hitting_Three_Flies_with_One_Slap
link-external: true
---

We propose RL-Cargo, a revenue management approach for air cargo that combines machine-learning prediction with decision-making based on deep reinforcement learning. This approach addresses a problem unique to the air-cargo business: the wide discrepancy between the quantity (weight or volume) that a shipper books and the amount the airline actually receives at departure time. This discrepancy leads to sub-optimal and inefficient behavior by both the shipper and the airline, and thus to an overall loss of potential revenue for the airline. We propose a Deep Q-Network (DQN) method that uses uncertainty bounds from the prediction step for decision-making within a prescriptive learning framework. Parts of RL-Cargo have been deployed in the production environment of a large commercial airline. We have validated the benefits of RL-Cargo using a real dataset; more specifically, we have carried out simulations seeded with real data to compare classical Dynamic Programming and Deep Reinforcement Learning techniques with respect to offloading costs and revenue generation. Our results suggest that prescriptive learning, which combines prediction with decision-making, provides a principled approach for managing the air-cargo revenue ecosystem. Furthermore, the proposed approach can be abstracted to many other application domains where decisions must be made in the face of both data and behavioral uncertainty.
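The abstract does not spell out how the prediction's uncertainty bounds enter the DQN's decision rule. As one minimal, hypothetical sketch (not the paper's actual method), a learned value function could be evaluated at both ends of the predicted show-up interval and the action chosen by its worst case. All names, the toy value function, and the capacity/penalty numbers below are invented for illustration.

```python
# Hypothetical sketch: pick the action whose worst-case value over the
# predicted show-up interval [lower, upper] is highest (a conservative rule).
# This is NOT the paper's DQN; the value function here is a toy stand-in.

def decide_booking(q_fn, booking, lower, upper):
    """Choose 'accept' or 'reject' by maximizing the pessimistic Q-value
    over the uncertainty bounds on the quantity that will actually show up."""
    best_action, best_value = None, float("-inf")
    for action in ("accept", "reject"):
        # Evaluate the action at both ends of the interval and keep the minimum.
        worst = min(q_fn(booking, shown, action) for shown in (lower, upper))
        if worst > best_value:
            best_action, best_value = action, worst
    return best_action

# Toy stand-in for a trained value function: accepting is worth the tonnage
# that shows up, minus a penalty for offloading weight above capacity.
CAPACITY = 10.0

def toy_q(booking, shown_tonnage, action):
    if action == "reject":
        return 0.0
    offload_penalty = 4.0 * max(0.0, shown_tonnage - CAPACITY)
    return shown_tonnage - offload_penalty

print(decide_booking(toy_q, {"id": 1}, lower=6.0, upper=9.0))   # → accept
print(decide_booking(toy_q, {"id": 1}, lower=8.0, upper=14.0))  # → reject
```

The second booking is rejected because its upper bound (14 t) would incur an offloading penalty that makes the pessimistic value negative, illustrating how wider uncertainty bounds can flip the decision even when the expected quantity fits.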