The barometer is an instrument for atmospheric measurement, specifically used to determine the force per unit of surface exerted by the weight of the atmosphere. A large number of atmospheric installations incorporate different types of these devices, and they are used daily, since atmospheric pressure plays an important role in determining and forecasting the weather, as well as in research when experiments are carried out, because pressure variations can affect or alter the operation of many electronic and mechanical devices.
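As a brief aside on the quantity this abstract defines, force per unit of surface is the standard definition of pressure; in conventional notation (the symbols below are not taken from the original text):

```latex
p = \frac{F}{A} = \frac{m\,g}{A}
```

where F is the weight of the column of air of mass m resting on a surface of area A, and g is the gravitational acceleration.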
This research paper explores efficient ways of implementing the N-Body problem. The N-Body problem, in the field of physics, predicts the movements of planets and their gravitational interactions. In this paper, the efficient execution of heavy computational work across multiple CPU cores and the GPU is examined; this is achieved by integrating the OpenMP parallelization API and Nvidia CUDA into the code. The paper also presents a performance analysis of the various algorithms used to solve the same problem. This research not only serves as an alternative for complex simulations but also applies to larger data sets that require work distribution and computationally expensive procedures.
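As a hedged illustration of the kind of parallelization this abstract describes, the sketch below distributes the all-pairs force computation of a direct O(N^2) N-body step across CPU cores with OpenMP; the data layout, array names, and constants are assumptions made for illustration, not the paper's actual implementation.

```c
#include <math.h>
#include <omp.h>

#define N   4096
#define G   6.674e-11   /* gravitational constant */
#define EPS 1e-9        /* softening term to avoid division by zero */

/* Positions, masses, and accumulated accelerations (layout assumed for illustration). */
static double px[N], py[N], pz[N], mass[N];
static double ax[N], ay[N], az[N];

/* One direct-summation force step: the outer loop is split across CPU threads. */
void compute_accelerations(void)
{
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++) {
        double axi = 0.0, ayi = 0.0, azi = 0.0;
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            double dx = px[j] - px[i];
            double dy = py[j] - py[i];
            double dz = pz[j] - pz[i];
            double r2 = dx * dx + dy * dy + dz * dz + EPS;
            double inv_r3 = 1.0 / (r2 * sqrt(r2));
            axi += G * mass[j] * dx * inv_r3;
            ayi += G * mass[j] * dy * inv_r3;
            azi += G * mass[j] * dz * inv_r3;
        }
        ax[i] = axi; ay[i] = ayi; az[i] = azi;
    }
}
```

Compiled with `gcc -fopenmp -lm`, the outer loop is divided among the available cores; a CUDA version of the same idea would instead map one GPU thread to each body.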
In this document we focus on modifying the Linux kernel through memory and scheduler parameters. The main objective is to study the performance of a computer during the execution of the AIO-Stress benchmark. It was necessary to run the test several times, since three of the parameters mentioned in this project were each modified five times. After completing the tests, the results were displayed on graphs, showing that all of the variables have a noticeable influence on the performance of the computer.
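The abstract does not name the specific parameters that were tuned; as a minimal sketch of how such kernel parameters can be changed at runtime, the snippet below writes a value to a sysctl entry under /proc/sys. The parameter vm.swappiness is used here purely as an example of a memory-related knob and requires root privileges to change.

```c
#include <stdio.h>

/* Write a new value to a sysctl entry exposed under /proc/sys.
 * Returns 0 on success, -1 on failure. Requires root privileges. */
static int set_sysctl(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s\n", value);
    fclose(f);
    return 0;
}

int main(void)
{
    /* vm.swappiness is an illustrative memory parameter only,
     * not necessarily one of the parameters studied in the project. */
    return set_sysctl("/proc/sys/vm/swappiness", "10");
}
```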
Relational databases have been the tools of choice for storing information in computing systems. However, NoSQL databases, as a trend, have been gaining ground, especially because of their scalability and the speed of their response times. PostgreSQL has incorporated some NoSQL-style features, such as ephemeral storage and JSON data handling, features that can be exploited from within the database engine to make it more powerful. The objective of this article is to evaluate, based on the documentation found, the behavior of PostgreSQL's NoSQL features against a NoSQL engine, comparing it with MongoDB with respect to response times, and to present the advantages of each over the other.
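As a small, hedged illustration of the PostgreSQL features this article refers to, the sketch below uses libpq to create an unlogged table (one possible reading of "ephemeral storage", assumed here) holding a jsonb column and queries it with jsonb operators; the connection string and table name are placeholders.

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection string is a placeholder for illustration. */
    PGconn *conn = PQconnectdb("dbname=testdb user=postgres");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* UNLOGGED skips WAL writes, trading durability for speed. */
    PQclear(PQexec(conn,
        "CREATE UNLOGGED TABLE IF NOT EXISTS docs (data jsonb)"));
    PQclear(PQexec(conn,
        "INSERT INTO docs VALUES ('{\"name\": \"ada\", \"active\": true}')"));

    /* @> tests jsonb containment; ->> extracts a field as text. */
    PGresult *res = PQexec(conn,
        "SELECT data->>'name' FROM docs WHERE data @> '{\"active\": true}'");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("matched document name: %s\n", PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```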
Keywords: NoSQL features in PostgreSQL, MongoDB, PostgreSQL
Recent work on information extraction has suggested that fast, interactive tools can be highly effective; however, creating a usable system is challenging, and few publicly available tools exist. In this paper we present IKE, a new extraction tool that performs fast, interactive bootstrapping to develop high-quality extraction patterns for targeted relations, and provides novel solutions to these usability concerns. In particular, it uses a novel query language that is expressive, easy to understand, and fast to execute - essential requirements for a practical system - and it is the first interactive extraction tool to seamlessly integrate symbolic and distributional methods for search. An initial evaluation suggests that relation tables can be populated substantially faster than by manual pattern authoring or by fully automated tools, while retaining accuracy - an important step towards practical knowledge-base construction.
We explore ways of offloading computationally rigorous tasks from devices with slow logical processors onto a network of anonymous peer-processors. Recent advances in secret sharing schemes, decentralized consensus mechanisms, and multiparty computation (MPC) protocols are combined to create a P2P MPC market. Unlike other computational "clouds", ours is able to generically compute any arithmetic circuit, providing a viable platform for processing on the semantic web. Finally, we show that such a system works in a hostile environment, that it scales well, and that it adapts very easily to any future advances in the complexity-theoretic cryptography used. Specifically, we show that the feasibility of our system can only improve, and is historically guaranteed to do so.
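As a hedged sketch of one building block this abstract mentions, the snippet below implements plain additive secret sharing over a prime modulus: a value is split into n shares that sum back to the secret, so no proper subset of shares reveals it on its own. The modulus and the use of rand() are illustrative only; a real deployment would use a cryptographically secure generator and the protocol actually chosen by the authors.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define PRIME 2147483647ULL   /* 2^31 - 1, illustrative modulus */

/* Split `secret` into n additive shares modulo PRIME. */
void share(uint64_t secret, uint64_t *shares, int n)
{
    uint64_t sum = 0;
    for (int i = 0; i < n - 1; i++) {
        shares[i] = (uint64_t)rand() % PRIME;   /* NOT cryptographically secure */
        sum = (sum + shares[i]) % PRIME;
    }
    /* Last share is chosen so that all shares sum to the secret mod PRIME. */
    shares[n - 1] = (secret % PRIME + PRIME - sum) % PRIME;
}

/* Reconstruct the secret by summing all shares modulo PRIME. */
uint64_t reconstruct(const uint64_t *shares, int n)
{
    uint64_t sum = 0;
    for (int i = 0; i < n; i++)
        sum = (sum + shares[i]) % PRIME;
    return sum;
}

int main(void)
{
    srand((unsigned)time(NULL));
    uint64_t shares[3];
    share(42, shares, 3);
    printf("reconstructed: %llu\n",
           (unsigned long long)reconstruct(shares, 3));
    return 0;
}
```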
Due to the widespread adoption of the internet and its services, protocols have been established and new ones arise every year. Whether for securing connections, encrypting information, or authenticating services, protocols play an important part in network communications. TCP/IP has become one of the ubiquitous protocol suites for secure communication and, therefore, a desirable target for covert information encapsulation. In this article, we discuss the art of unauthorized data transfer - covert techniques - for data encapsulation in protocol data packets, emphasizing header field manipulation.
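As a hedged illustration of the header-field manipulation this article discusses, the sketch below hides two covert bytes in the 16-bit Identification field of a hand-built IPv4 header; the struct layout follows RFC 791 but is written out locally so the example stays self-contained, and no packet is actually sent (a real sender would also convert fields to network byte order with htons()).

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal IPv4 header laid out per RFC 791 (no options). */
struct ipv4_header {
    uint8_t  version_ihl;     /* version (4 bits) + header length (4 bits) */
    uint8_t  tos;
    uint16_t total_length;
    uint16_t identification;  /* covert channel: normally a fragmentation ID */
    uint16_t flags_fragment;
    uint8_t  ttl;
    uint8_t  protocol;
    uint16_t checksum;
    uint32_t src_addr;
    uint32_t dst_addr;
};

/* Encode two covert bytes into the Identification field. */
void embed_covert_bytes(struct ipv4_header *ip, uint8_t hi, uint8_t lo)
{
    ip->identification = (uint16_t)((hi << 8) | lo);
}

/* Recover the two covert bytes on the receiving side. */
void extract_covert_bytes(const struct ipv4_header *ip, uint8_t *hi, uint8_t *lo)
{
    *hi = (uint8_t)(ip->identification >> 8);
    *lo = (uint8_t)(ip->identification & 0xFF);
}

int main(void)
{
    struct ipv4_header ip;
    memset(&ip, 0, sizeof ip);
    ip.version_ihl = 0x45;   /* IPv4, 20-byte header */
    ip.ttl = 64;

    embed_covert_bytes(&ip, 'H', 'i');

    uint8_t a, b;
    extract_covert_bytes(&ip, &a, &b);
    printf("covert payload: %c%c\n", a, b);
    return 0;
}
```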