ISPA 2008 APDCT Workshop1 Reinforcement Learning applied to Meta-scheduling in grid environments Bernardo Costa Inês Dutra Marta Mattoso
ISPA 2008 APDCT Workshop2 Outline Introduction Algorithms Experiments Conclusions and Future work
ISPA 2008 APDCT Workshop3 Introduction Algorithms Experiments Conclusions and Future work
ISPA 2008 APDCT Workshop4 Introduction Relevance: available grid schedulers usually do not employ a strategy that can benefit a single user or multiple users. Some strategies employ performance-information-dependent algorithms (PIDA). Most published work is evaluated only in simulation. Difficulty: monitoring information is unreliable due to network latency.
ISPA 2008 APDCT Workshop5 Introduction Algorithms Experiments Conclusions and Future work
ISPA 2008 APDCT Workshop6 Study of 2 Algorithms (AG) A. Galstyan, K. Czajkowski, and K. Lerman. Resource allocation in the grid using reinforcement learning. In AAMAS, pages 1314–1315. IEEE, 2004. (MQD) Y. C. Lee and A. Y. Zomaya. A grid scheduling algorithm for bag-of-tasks applications using multiple queues with duplication. In ICIS-COMSAR (5th IEEE/ACIS International Conference on Computer and Information Science and 1st IEEE/ACIS International Workshop on Component-Based Software Engineering, Software Architecture and Reuse), pages 5–10, 2006.
ISPA 2008 APDCT Workshop7 What is reinforcement learning? A machine learning technique used to learn behaviours from a series of temporal events. It is not supervised learning: instead of labelled examples, feedback comes as rewards and punishments.
ISPA 2008 APDCT Workshop8 Algorithms AG and MQD use reinforcement learning to associate an efficiency rank with an RMS. Reinforcement learning is native to AG; MQD was modified to use this technique to estimate the computational power of an RMS. AG allocates RMSs in a greedy and probabilistic way; MQD allocates RMSs associatively and deterministically.
ISPA 2008 APDCT Workshop9 Algorithms Calculating efficiency: a reward is assigned to an RMS whose performance is better than average. The reward can be negative (a punishment). An RMS may also keep its efficiency value unchanged.
ISPA 2008 APDCT Workshop10 Algorithms Calculating efficiency: two parameters govern the update. One weights how the time spent executing a task affects the reward; l is a learning parameter.
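A minimal sketch of the kind of efficiency update the slides describe, assuming an additive rule: the function names, the reward definition, and the default value of l are illustrative assumptions, not the paper's exact formulas.

```python
# Illustrative only: the exact AG/MQD update rule is not reproduced
# here. The reward is positive when an RMS beats the average task
# time, negative (a punishment) when it is slower, and zero at the
# average, in which case the efficiency value does not change.

def reward(task_time, avg_time):
    """Relative performance of one RMS against the average."""
    return (avg_time - task_time) / avg_time

def update_efficiency(e, task_time, avg_time, l=0.1):
    """Additive update weighted by the learning parameter l."""
    return e + l * reward(task_time, avg_time)
```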
ISPA 2008 APDCT Workshop11 Algorithms AG: with high probability, associates a job with the best available RMS; otherwise, selects one at random. MQD: groups of jobs sorted according to execution time are associated with an RMS; the most efficient RMS executes the heaviest jobs. An initial allocation is used to estimate the RMSs' efficiency (see the sketch below).
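The two allocation policies can be sketched as follows; epsilon, the job and RMS record fields, and the equal-size job groups are assumptions made for illustration, not details taken from the papers.

```python
import random

def ag_select(efficiency, available, epsilon=0.1):
    """AG sketch: with probability 1 - epsilon pick the available RMS
    with the highest efficiency rank, otherwise pick one at random."""
    if random.random() < epsilon:
        return random.choice(available)
    return max(available, key=lambda rms: efficiency[rms])

def mqd_assign(jobs, rms_list):
    """MQD sketch: sort jobs by estimated execution time (heaviest
    first), cut them into one contiguous group per RMS, and hand the
    heaviest group to the most efficient RMS."""
    jobs = sorted(jobs, key=lambda j: j["time"], reverse=True)
    rms_list = sorted(rms_list, key=lambda r: r["eff"], reverse=True)
    size = -(-len(jobs) // len(rms_list))  # ceiling division
    return {r["name"]: jobs[i * size:(i + 1) * size]
            for i, r in enumerate(rms_list)}
```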
ISPA 2008 APDCT Workshop25 Introduction Algorithms Experiments Conclusions and Future work
ISPA 2008 APDCT Workshop26 Experiments GridbusBroker: no need to install it on other grid sites; the only requirement is ssh access to a grid node. Round-robin scheduler (RR). Limitations: does not support job duplication and imposes a limit on the number of active jobs per RMS.
ISPA 2008 APDCT Workshop28 Experiments Objective: study the performance of the algorithms in a real grid environment. Application: bag-of-tasks, CPU-intensive, with task durations between 3 and 8 minutes.
ISPA 2008 APDCT Workshop29 Experiments Evaluation criterion: makespan, normalized with respect to RR (see the sketch below).
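Concretely, the normalization can be read as dividing each scheduler's makespan by the RR makespan of the same run, so RR is always 1.0 and values below 1.0 indicate an improvement; the numbers below are hypothetical.

```python
def normalized(makespan, rr_makespan):
    """Makespan relative to round-robin: values < 1.0 beat RR."""
    return makespan / rr_makespan

print(normalized(371.2, 412.0))  # ~0.90, i.e. about 10% faster than RR
```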
ISPA 2008 APDCT Workshop30 Experiments Phase I: tuning of the two parameters (the reward weighting and the learning parameter l), with 500 jobs. Phase II: performance of re-scheduling; the load was later increased to 1000 jobs.
ISPA 2008 APDCT Workshop31 Experiments One experiment is a run of consecutive executions of RR, AG and MQD. A scenario is a set of experiments with fixed parameters. For each scenario: 15 runs. T-tests verify the statistical difference between AG/MQD and RR, with 95% confidence (the results follow a normal distribution).
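A sketch of that statistical check, assuming a paired t-test (a natural reading, since each run executes all three schedulers on the same workload); the makespan values are made up and scipy is an implementation choice, not the paper's tooling.

```python
# The paper uses 15 runs per scenario; five hypothetical per-run
# makespans (in seconds) are enough to show the test itself.
from scipy import stats

rr = [412.0, 398.5, 421.3, 405.0, 417.8]
ag = [371.2, 365.9, 389.4, 360.1, 377.5]

t, p = stats.ttest_rel(ag, rr)  # paired: same run, different scheduler
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("difference from RR is significant at the 95% level")
```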
ISPA 2008 APDCT Workshop35 Introduction Algorithms Experiments Conclusions and Future work
ISPA 2008 APDCT Workshop36 Conclusions and Future work Results showed that it was possible to achieve optimizations with both AG and MQD with respect to RR. The experiments validate MQD simulation results found in the literature. Reinforcement learning is a promising technique for classifying resources in real grid environments.
ISPA 2008 APDCT Workshop37 Conclusions and Future work Study the behavior of AG and MQD with other kinds of applications, e.g., data-intensive applications and applications with dependencies.
ISPA 2008 APDCT Workshop40 Definitions Resource manager: a system that manages the submission and execution of jobs within a specific domain. Resource Management System (RMS): a synonym for resource manager. Batch job scheduler: the typical scheduler of an RMS, e.g., SGE, PBS/Torque.
ISPA 2008 APDCT Workshop41 Definitions Meta-scheduler: a scheduler that has no direct access to the resources, only to the RMSs that manage them. Reinforcement learning: a technique that induces an agent to make decisions through offered rewards. Makespan: the total time taken by a meta-scheduler to finish the execution of the set of jobs assigned to it.
ISPA 2008 APDCT Workshop42 Definitions Job: an application submitted to the grid by a user, generally executed by an RMS. Examples of job types: Bag-of-Tasks: jobs with no explicit dependency or precedence relation among them. Parameter sweep (APST): jobs of the same executable that differ in an input value that varies between executions.