name (stringlengths 7-10) | title (stringlengths 13-125) | abstract (stringlengths 67-3.02k) | fulltext (stringclasses, 1 value) | keywords (stringlengths 17-734) |
---|---|---|---|---|
train_1183 | Evolving robust asynchronous cellular automata for the density task | In this paper the evolution of three kinds of asynchronous cellular automata is studied for the density task. Results are compared with those obtained for synchronous automata and the influence of various asynchronous update policies on the computational strategy is described. How synchronous and asynchronous cellular automata behave is investigated when the update policy is gradually changed, showing that asynchronous cellular automata are more adaptable. The behavior of synchronous and asynchronous evolved automata is studied under the presence of random noise of two kinds and it is shown that asynchronous cellular automata implicitly offer superior fault tolerance | asynchronous cellular automata;synchronous automata;discrete dynamical systems;random noise;cellular automata;fault tolerance |
|
train_1184 | Measuring return: revealing ROI | The most critical part of the return-on-investment odyssey is to develop metrics that matter to the business and to measure systems in terms of their ability to help achieve those business goals. Everything must flow from those key metrics. And don't forget to revisit those every now and then, too. Since all systems wind down over time, it's important to keep tabs on how well your automation investment is meeting the metrics established by your company. Manufacturers are clamoring for a tool to help quantify returns and analyze the results | key metrics;roi;automation investment;technology purchases;return-on-investment |
|
train_1185 | Trading exchanges: online marketplaces evolve | Looks at how trading exchanges are evolving rapidly to help manufacturers keep up with customer demand | supply chain management;manufacturers;online marketplaces;customer demand;xml standards;enterprise platforms;core software platform;private exchanges;integration technology;middleware;enterprise resource planning;trading exchanges;content management capabilities |
|
train_1186 | Implementing: it's all about processes | Looks at how the key to successful technology deployment can be found in a set of four basic disciplines | implementation;manufacturers;third-party integration;vendor-supplied hardware integration services;technology deployment;incremental targets;vendor-supplied software integration services |
|
train_1187 | Ethernet networks: getting down to business | While it seems pretty clear that Ethernet has won the battle for the mindshare as the network of choice for the factory floor, there's still a war to be won in implementation as cutting-edge manufacturers begin to adopt the technology on a widespread basis | ethernet;supervisory level;cutting-edge manufacturers;factory floor |
|
train_1188 | It's time to buy | There is an upside to a down economy: over-zealous suppliers are willing to make deals that were unthinkable a few years ago. That's because vendors are experiencing the same money squeeze as manufacturers, which makes the year 2002 the perfect time to invest in new technology. The author states that when negotiating the deal, provisions for unexpected costs, an exit strategy, and even shared risk with the vendor should be on the table | exit strategy;money squeeze;bargaining power;vendor;buyers market;shared risk;negotiation;suppliers;unexpected costs |
|
train_1189 | CRM: approaching zenith | Looks at how manufacturers are starting to warm up to the concept of customer relationship management. CRM has matured into what is expected to be big business. As CRM software evolves to its second, some say third, generation, it's likely to be more valuable to holdouts in manufacturing and other sectors | manufacturers;customer relationship management;manufacturing;crm |
|
train_119 | JPEG2000: standard for interactive imaging | JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG) working under the auspices of the International Standards Organization. Although the new standard does offer superior compression performance to JPEG, JPEG2000 provides a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work, exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet | image compression;client-server systems;international standards organization;joint photographic experts group;interoperable compression;review;interactive imaging;scalable compression;jpeg2000 |
|
train_1190 | Buying into the relationship [business software] | Choosing the right software to improve business processes can have a huge impact on a company's efficiency and profitability. While it is sometimes hard to get beyond vendor hype about software features and functionality and know what to realistically expect, it is even more difficult to determine if the vendor is the right vendor to partner with. Thus picking the right software is important, but companies have to realize that what they are really buying into is a relationship with the vendor | business software;software evaluation;management;functionality;vendor relationship |
|
train_1191 | On the monotonicity conservation in numerical solutions of the heat equation | It is important to choose such numerical methods in practice that mirror the characteristic properties of the described process beyond the stability and convergence. The investigated qualitative property in this paper is the conservation of the monotonicity in space of the initial heat distribution. We prove some statements about the monotonicity conservation and total monotonicity of one-step vector-iterations. Then, applying these results, we consider the numerical solutions of the one-dimensional heat equation. Our main theorem formulates the necessary and sufficient condition of the uniform monotonicity conservation. The sharpness of the conditions is demonstrated by numerical examples | monotonicity conservation;characteristic properties;qualitative property;necessary and sufficient condition;heat equation;one-step vector-iterations;numerical solutions |
|
train_1192 | Construction of two-sided bounds for initial-boundary value problems | This paper extends the bounding operator approach developed for boundary value problems to the case of initial-boundary value problems (IBVPs). Following the general principle of bounding operators enclosing methods for the case of partial differential equations are discussed. In particular, continuous discretization methods with an appropriate error bound controlled shift and monotone extensions of Rothe's method for parabolic problems are investigated | partial differential equations;two-sided bounds;bounding operators;parabolic problems;bounding operator approach;initial-boundary value problems |
|
train_1193 | Operator splitting and approximate factorization for taxis-diffusion-reaction models | In this paper we consider the numerical solution of 2D systems of certain types of taxis-diffusion-reaction equations from mathematical biology. By spatial discretization these PDE systems are approximated by systems of positive, nonlinear ODEs (Method of Lines). The aim of this paper is to examine the numerical integration of these ODE systems for low to moderate accuracy by means of splitting techniques. An important consideration is maintenance of positivity. We apply operator splitting and approximate matrix factorization using low order explicit Runge-Kutta methods and linearly implicit Runge-Kutta-Rosenbrock methods. As a reference method the general purpose solver VODPK is applied | numerical integration;mathematical biology;approximate matrix factorization;taxis-diffusion-reaction models;spatial discretization;runge-kutta methods;numerical solution;nonlinear odes;approximate factorization;pde systems;operator splitting;linearly implicit runge-kutta-rosenbrock methods |
|
train_1194 | New methods for oscillatory problems based on classical codes | The numerical integration of differential equations with oscillatory solutions is a very common problem in many fields of the applied sciences. Some methods have been specially devised for this kind of problem. In most of them, the calculation of the coefficients needs more computational effort than the classical codes because such coefficients depend on the step-size in a not simple manner. On the contrary, in this work we present new algorithms specially designed for perturbed oscillators whose coefficients have a simple dependence on the step-size. The methods obtained are competitive when comparing with classical and special codes | numerical integration;oscillatory problems;oscillatory solutions;perturbed oscillators;classical codes;differential equations |
|
train_1195 | Sharpening the estimate of the stability constant in the maximum-norm of the Crank-Nicolson scheme for the one-dimensional heat equation | This paper is concerned with the stability constant C_infinity in the maximum-norm of the Crank-Nicolson scheme applied to the one-dimensional heat equation. A well-known result due to S.J. Serdyukova is that C_infinity < 23. In the present paper, by using a sharp resolvent estimate for the discrete Laplacian together with the Cauchy formula, it is shown that 3 <= C_infinity < 4.325. This bound also holds when the heat equation is considered on a bounded interval along with Dirichlet or Neumann boundary conditions | one-dimensional heat equation;cauchy formula;discrete laplacian;dirichlet boundary conditions;stability constant;sharp resolvent estimate;crank-nicolson scheme;neumann boundary conditions |
|
train_1196 | Multiple shooting using a dichotomically stable integrator for solving differential-algebraic equations | In previous work by the first author, it has been established that a dichotomically stable discretization is needed when solving a stiff boundary-value problem in ordinary differential equations (ODEs), when sharp boundary layers may occur at each end of the interval. A dichotomically stable implicit Runge-Kutta method, using the 3-stage, fourth-order, Lobatto IIIA formulae, has been implemented in a variable step-size initial-value integrator, which could be used in a multiple-shooting approach. In the case of index-one differential-algebraic equations (DAEs) the use of the Lobatto IIIA formulae has an advantage, over a comparable Gaussian method, that the order is the same for both differential and algebraic variables, and there is no need to treat them separately. The ODE integrator has been adapted for the solution of index-one DAEs, and the resulting integrator (SYMDAE) has been inserted into the multiple-shooting code (MSHDAE) previously developed by R. Lamour for differential-algebraic boundary-value problems. The standard version of MSHDAE uses a BDF integrator, which is not dichotomically stable, and for some stiff test problems this fails to integrate across the interval of interest, while the dichotomically stable integrator SYMDAE encounters no difficulty. Indeed, for such problems, the modified version of MSHDAE produces an accurate solution, and within limits imposed by computer word length, the efficiency of the solution process improves with increasing stiffness. For some nonstiff problems, the solution is also entirely satisfactory | implicit runge-kutta method;stiff boundary-value problem;ordinary differential equations;lobatto iiia formulae;initial-value integrator;differential-algebraic equations;dichotomically stable integrator;multiple shooting |
|
train_1197 | Numerical behaviour of stable and unstable solitary waves | In this paper we analyse the behaviour in time of the numerical approximations to solitary wave solutions of the generalized Benjamin-Bona-Mahony equation. This equation possesses an important property: the stability of these solutions depends on their velocity. We identify the error propagation mechanisms in both the stable and unstable case. In particular, we show that in the stable case, numerical methods that preserve some conserved quantities of the problem are more appropriate for the simulation of this kind of solutions | numerical methods;stable solitary waves;numerical approximations;numerical behaviour;unstable solitary waves;error propagation mechanisms;generalized benjamin-bona-mahony equation |
|
train_1198 | Post-projected Runge-Kutta methods for index-2 differential-algebraic equations | A new projection technique for Runge-Kutta methods applied to index-2 differential-algebraic equations is presented in which the numerical approximation is projected only as part of the output process. It is shown that for methods that are strictly stable at infinity, the order of convergence is unaffected compared to standard projected methods. Gauss methods, for which this technique is of special interest when some symmetry is to be preserved, are studied in more detail | post-projected runge-kutta methods;index-2 differential-algebraic equations;order of convergence;projected methods;numerical approximation |
|
train_1199 | Quasi stage order conditions for SDIRK methods | The stage order condition is a simplifying assumption that reduces the number of order conditions to be fulfilled when designing a Runge-Kutta (RK) method. Because a DIRK (diagonally implicit RK) method cannot have stage order greater than 1, we introduce quasi stage order conditions and derive some of their properties for DIRKs. We use these conditions to derive a low-order DIRK method with embedded error estimator. Numerical tests with stiff ODEs and DAEs of index 1 and 2 indicate that the method is competitive with other RK methods for low accuracy tolerances | quasi stage order conditions;sdirk methods;differential-algebraic systems;numerical tests;embedded error estimator;diagonally implicit runge-kutta method |
|
train_12 | National learning systems: a new approach on technological change in late industrializing economies and evidences from the cases of Brazil and South Korea | The paper has two intertwined parts. The first one is a proposal for a conceptual and theoretical framework to understand technical change in late industrializing economies. The second part develops a kind of empirical test of the usefulness of that new framework by means of a comparative study of the Brazilian and South Korean cases. All the four types of macroevidences of the technical change processes of Brazil and Korea corroborate, directly or indirectly, the hypothesis of the existence of actual cases of national learning systems (NLSs) of passive and active nature, as it is shown to be the cases of Brazil and South Korea, respectively. The contrast between the two processes of technical change proves remarkable, despite both processes being essentially confined to learning. The concepts of passive and active NLSs show how useful they are to apprehend the diversity of those realities, and, consequently, to avoid, for instance, interpretations that misleadingly suppose (based on conventional economic theory) that those countries have a similar lack of technological dynamism | national learning systems;brazil;late industrializing economies;technological change;national innovation system;south korea |
|
train_120 | Self-organized critical traffic in parallel computer networks | In a recent paper, we analysed the dynamics of traffic flow in a simple, square lattice architecture. It was shown that a phase transition takes place between a free and a congested phase. The transition point was shown to exhibit optimal information transfer and wide fluctuations in time, with scale-free properties. In this paper, we further extend our analysis by considering a generalization of the previous model in which the rate of packet emission is regulated by the local congestion perceived by each node. As a result of the feedback between traffic congestion and packet release, the system is poised at criticality. Many well-known statistical features displayed by Internet traffic are recovered from our model in a natural way | internet traffic;parallel computer networks;wide fluctuations;phase transition;scale-free properties;packet release;square lattice architecture;statistical features;self-organized critical traffic;congested phase;optimal information transfer;traffic flow dynamics;packet emission;free phase;transition point;generalization |
|
train_1200 | From continuous recovery to discrete filtering in numerical approximations of conservation laws | Modern numerical approximations of conservation laws rely on numerical dissipation as a means of stabilization. The older, alternative approach is the use of central differencing with a dose of artificial dissipation. In this paper we review the successful class of weighted essentially non-oscillatory finite volume schemes which comprise sophisticated methods of the first kind. New developments in image processing have made new devices possible which can serve as highly nonlinear artificial dissipation terms. We view artificial dissipation as discrete filter operation and introduce several new algorithms inspired by image processing | conservation laws;continuous recovery;discrete filter operation;image processing;numerical approximations;central differencing;artificial dissipation;numerical dissipation;finite volume schemes;discrete filtering;highly nonlinear artificial dissipation terms |
|
train_1201 | Moving into the mainstream [product lifecycle management] | Product lifecycle management (PLM) is widely recognised by most manufacturing companies, as manufacturers begin to identify and implement targeted projects intended to deliver return-on-investment in a timely fashion. Vendors are also releasing second-generation PLM products that are packaged, out-of-the-box solutions | product data management;product lifecycle management;manufacturing companies;enterprise resource planning;product development |
|
train_1202 | More than the money [software project] | Experiences creating budgets for large software projects have taught manufacturers that it is not about the money - it is about what one really needs. Before a company can begin to build a budget for a software project, it has to have a good understanding of what business issues need to be addressed and what the business objectives are. This step is critical because it defines the business goals, outlines the metrics for success, sets the scope for the project, and defines the criteria for selecting the right software | budgeting;management;software requirements;software projects;manufacturing industry |
|
train_1203 | Technology decisions 2002 | The paper looks at the critical hardware, software, and services choices manufacturers are making as they begin to emerge from the recession and position themselves for the future | manufacturing industries;customer relationship management;information technology;management of change;enterprise resource planning;services choices |
|
train_1204 | Design and prototype of a performance tool interface for OpenMP | This paper proposes a performance tools interface for OpenMP, similar in spirit to the MPI profiling interface in its intent to define a clear and portable API that makes OpenMP execution events visible to runtime performance tools. We present our design using a source-level instrumentation approach based on OpenMP directive rewriting. Rules to instrument each directive and their combination are applied to generate calls to the interface consistent with directive semantics and to pass context information (e.g., source code locations) in a portable and efficient way. Our proposed OpenMP performance API further allows user functions and arbitrary code regions to be marked and performance measurement to be controlled using new OpenMP directives. To prototype the proposed OpenMP performance interface, we have developed compatible performance libraries for the EXPERT automatic event trace analyzer [17, 18] and the TAU performance analysis framework [13]. The directive instrumentation transformations we define are implemented in a source-to-source translation tool called OPARI. Application examples are presented for both EXPERT and TAU to show the OpenMP performance interface and OPARI instrumentation tool in operation. When used together with the MPI profiling interface (as the examples also demonstrate), our proposed approach provides a portable and robust solution to performance analysis of OpenMP and mixed-mode (OpenMP + MPI) applications | opari;openmp directive rewriting;source-to-source translation tool;expert automatic event trace analyzer;mpi profiling interface;directive semantics;performance tool interface;source-level instrumentation approach;arbitrary code regions;tau performance analysis framework;parallel programming;api;performance libraries |
|
train_1205 | HPCVIEW: a tool for top-down analysis of node performance | It is increasingly difficult for complex scientific programs to attain a significant fraction of peak performance on systems that are based on microprocessors with substantial instruction-level parallelism and deep memory hierarchies. Despite this trend, performance analysis and tuning tools are still not used regularly by algorithm and application designers. To a large extent, existing performance tools fail to meet many user needs and are cumbersome to use. To address these issues, we developed HPCVIEW - a toolkit for combining multiple sets of program profile data, correlating the data with source code, and generating a database that can be analyzed anywhere with a commodity Web browser. We argue that HPCVIEW addresses many of the issues that have limited the usability and the utility of most existing tools. We originally built HPCVIEW to facilitate our own work on data layout and optimizing compilers. Now, in addition to daily use within our group, HPCVIEW is being used by several code development teams in DoD and DoE laboratories as well as at NCSA | peak performance;binary analysis;software tools;instruction-level parallelism;performance analysis;data layout;commodity web browser;source code;deep memory hierarchies;complex scientific programs;optimizing compilers;node performance;hpcview;top-down analysis |
|
train_1206 | The MAGNeT toolkit: design, implementation and evaluation | The current trend in constructing high-performance computing systems is to connect a large number of machines via a fast interconnect or a large-scale network such as the Internet. This approach relies on the performance of the interconnect (or Internet) to enable fast, large-scale distributed computing. A detailed understanding of the communication traffic is required in order to optimize the operation of the entire system. Network researchers traditionally monitor traffic in the network to gain the insight necessary to optimize network operations. Recent work suggests additional insight can be obtained by also monitoring traffic at the application level. The Monitor for Application-Generated Network Traffic toolkit (MAGNeT) we describe here monitors application traffic patterns in production systems, thus enabling more highly optimized networks and interconnects for the next generation of high-performance computing systems | internet;interconnects;virtual supercomputing;traffic characterization;optimized networks;monitor for application-generated network traffic toolkit;high-performance computing;magnet;computational grids;high-performance computing systems;network protocol |
|
train_1207 | Packet spacing: an enabling mechanism for delivering multimedia content in computational grids | Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating, congestion-control mechanism; and most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980's due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counter-intuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss over 50% without adversely affecting delivered throughput | udp;internet;transport protocols;inter-packet spacing;streaming multimedia;udp-based streaming;distributed systems;remote computational steering;visualization data;network protocol |
|
train_1208 | A Virtual Test Facility for the simulation of dynamic response in materials | The Center for Simulating Dynamic Response of Materials at the California Institute of Technology is constructing a virtual shock physics facility for studying the response of various target materials to very strong shocks. The Virtual Test Facility (VTF) is an end-to-end, fully three-dimensional simulation of the detonation of high explosives (HE), shock wave propagation, solid material response to pressure loading, and compressible turbulence. The VTF largely consists of a parallel fluid solver and a parallel solid mechanics package that are coupled together by the exchange of boundary data. The Eulerian fluid code and Lagrangian solid mechanics model interact via a novel approach based on level sets. The two main computational packages are integrated through the use of Pyre, a problem solving environment written in the Python scripting language. Pyre allows application developers to interchange various computational models and solver packages without recompiling code, and it provides standardized access to several data visualization engines and data input mechanisms. In this paper, we outline the main components of the VTF, discuss their integration via Pyre, and describe some recent accomplishments in large-scale simulation using the VTF | problem solving environment;compressible turbulence;python scripting language;shock physics simulation;pressure loading;parallel fluid solver;pyre;parallel solid mechanics;solid material response;virtual test facility;data visualization;high explosives;virtual shock physics facility;shock wave propagation |
|
train_1209 | High-level language support for user-defined reductions | The optimized handling of reductions on parallel supercomputers or clusters of workstations is critical to high performance because reductions are common in scientific codes and a potential source of bottlenecks. Yet in many high-level languages, a mechanism for writing efficient reductions remains surprisingly absent. Further, when such mechanisms do exist, they often do not provide the flexibility a programmer needs to achieve a desirable level of performance. In this paper, we present a new language construct for arbitrary reductions that lets a programmer achieve a level of performance equal to that achievable with the highly flexible, but low-level combination of Fortran and MPI. We have implemented this construct in the ZPL language and evaluate it in the context of the initialization of the NAS MG benchmark. We show a 45 times speedup over the same code written in ZPL without this construct. In addition, performance on a large number of processors surpasses that achieved in the NAS implementation showing that our mechanism provides programmers with the needed flexibility | clusters of workstations;reductions;parallel supercomputers;language construct;parallel programming;scientific computing |
|
train_121 | Formula-dependent equivalence for compositional CTL model checking | We present a polytime computable state equivalence that is defined with respect to a given CTL formula. Since it does not attempt to preserve all CTL formulas, like bisimulation does, we can expect to compute coarser equivalences. This equivalence can be used to reduce the complexity of model checking a system of interacting FSM. Additionally, we show that in some cases our techniques can detect if a formula passes or fails, without forming the entire product machine. The method is exact and fully automatic, and handles full CTL | interacting fsm;formula-dependent equivalence;computation tree logic;ctl model checking;automatic method;compositional minimization;formal design verification;complexity reduction;coarse equivalence;ctl formula;polytime computable state equivalence |
|
train_1210 | Adaptive optimizing compilers for the 21st century | Historically, compilers have operated by applying a fixed set of optimizations in a predetermined order. We call such an ordered list of optimizations a compilation sequence. This paper describes a prototype system that uses biased random search to discover a program-specific compilation sequence that minimizes an explicit, external objective function. The result is a compiler framework that adapts its behavior to the application being compiled, to the pool of available transformations, to the objective function, and to the target machine. This paper describes experiments that attempt to characterize the space that the adaptive compiler must search. The preliminary results suggest that optimal solutions are rare and that local minima are frequent. If this holds true, biased random searches, such as a genetic algorithm, should find good solutions more quickly than simpler strategies, such as hill climbing | compilers;configurable compilers;compilation sequence;biased random search;optimizing compilers;adaptive compiler;optimizations |
|
train_1211 | Hybrid decision tree | In this paper, a hybrid learning approach named hybrid decision tree (HDT) is proposed. HDT simulates human reasoning by using symbolic learning to do qualitative analysis and using neural learning to do subsequent quantitative analysis. It generates the trunk of a binary HDT according to the binary information gain ratio criterion in an instance space defined by only original unordered attributes. If unordered attributes cannot further distinguish training examples falling into a leaf node whose diversity is beyond the diversity-threshold, then the node is marked as a dummy node. After all those dummy nodes are marked, a specific feedforward neural network named FANNC that is trained in an instance space defined by only original ordered attributes is exploited to accomplish the learning task. Moreover, this paper distinguishes three kinds of incremental learning tasks. Two incremental learning procedures designed for example-incremental learning with different storage requirements are provided, which enables HDT to deal gracefully with data sets where new data are frequently appended. Also a hypothesis-driven constructive induction mechanism is provided, which enables HDT to generate compact concept descriptions | data sets;hypothesis-driven constructive induction;storage requirements;neural learning;incremental learning;fannc;hybrid learning approach;qualitative analysis;quantitative analysis;reasoning;symbolic learning;feedforward neural network;hybrid decision tree;binary information gain ratio criterion |
|
train_1212 | TCRM: diagnosing tuple inconsistency for granulized datasets | Many approaches to granularization have been presented for knowledge discovery. However, the inconsistent tuples that exist in granulized datasets are hardly ever revealed. We developed a model, the tuple consistency recognition model (TCRM), to help efficiently detect inconsistent tuples for datasets that are granulized. The main outputs of the developed model include explored inconsistent tuples and consumed processing time. We further conducted an empirical test where eighteen continuous real-life datasets granulized by the equal width interval technique that embedded S-plus histogram binning algorithm (SHBA) and largest binning size algorithm (LBSA) binning algorithms were diagnosed. Remarkable results: almost 40% of the granulized datasets contain inconsistent tuples, and in 22% the proportion of inconsistent tuples exceeds 20% | tuple inconsistency;processing time;relational database;granularization;granulized datasets;large database;knowledge discovery;largest binning size algorithm;tcrm;sql;equal width interval technique;s-plus histogram binning algorithm;tuple consistency recognition model |
|
train_1213 | A knowledge intensive multi-agent framework for cooperative/collaborative design modeling and decision support of assemblies | Multi-agent modeling has emerged as a promising discipline for dealing with the decision making process in distributed information system applications. One such application is the modeling of distributed design or manufacturing processes which can link up various designs or manufacturing processes to form a virtual consortium on a global basis. This paper proposes a novel knowledge intensive multi-agent cooperative/collaborative framework for concurrent intelligent design and assembly planning, which integrates product design, design for assembly, assembly planning, assembly system design, and assembly simulation subjected to econo-technical evaluations. An AI protocol based method is proposed to facilitate the integration of intelligent agents for assembly design, planning, evaluation and simulation processes. A unified class of knowledge intensive Petri nets is defined using the OO knowledge-based Petri net approach and used as an AI protocol for handling both the integration and the negotiation problems among multi-agents. The detailed cooperative/collaborative mechanism and algorithms are given based on the knowledge object cooperation formalisms. As such, the assembly-oriented design system can easily be implemented under the multi-agent-based knowledge-intensive Petri net framework with concurrent integration of multiple cooperative knowledge sources and software. Thus, product design and assembly planning can be carried out simultaneously and intelligently in an entirely computer-aided concurrent design and assembly planning system | agent negotiation;knowledge intensive multi-agent framework;collaborative design modeling;concurrent intelligent design;distributed information system applications;cooperative framework;knowledge intensive petri nets;design for assembly;knowledge object cooperation;assembly simulation;ai protocol;assembly planning;distributed design;product design;decision support;virtual consortium |
|
train_1214 | Multi-agent collaboration for B2B workflow monitoring | Business-to-business (B2B) application environments are exceedingly dynamic and competitive. This dynamism is manifested in the form of changing process requirements and time constraints. However, current workflow management technologies have difficulties trying to solve problems, such as: how to deal with the dynamic nature of B2B commerce processes, how to manage the distributed knowledge and recourses, and how to reduce the transaction risk. In this paper, a collaborative multi-agent system is proposed. Multiple intelligent agents in our system can work together not only to identify the workflow problems, but also to solve such problems, by applying business rules, such as re-organizing the procurement and the transaction processes, and making necessary workflow process changes | business rules;internet;electronic commerce;b2b workflow monitoring;workflow management;changing process requirements;business-to-business applications;time constraints;transaction risk;multi-agent collaboration |
|
train_1215 | A knowledge-based approach for business process reengineering, SHAMASH | We present an overview of SHAMASH, a process modelling tool for business process reengineering. The main features that differentiate it from most current related tools are its ability to define and use organisation standards, functional structure, and develop automatic model simulation and optimisation. SHAMASH is a knowledge-based system, and we include a discussion on how knowledge acquisition takes place. Furthermore, we introduce a high level description of the architecture, the conceptual model, and other important modules of the system | process modelling tool;automatic model simulation;knowledge acquisition;knowledge-based approach;shamash;conceptual model;business process reengineering;knowledge-based system;functional structure;optimisation;organisation standards |
|
train_1216 | Knowledge flow management for distributed team software development | Cognitive cooperation is often neglected in current team software development processes. This issue becomes more important than ever when team members are globally distributed. This paper presents a notion of knowledge flow and the related management mechanism for realizing an ordered knowledge sharing and cognitive cooperation in a geographically distributed team software development process. The knowledge flow can carry and accumulate knowledge when it goes through from one team member to another. The coordination between the knowledge flow process and the workflow process of a development team provides a new way to improve traditional team software development processes. A knowledge grid platform has been implemented to support the knowledge flow management across the Internet | knowledge grid platform;internet;knowledge flow management;software development management;knowledge flow representation;workflow process;cognitive cooperation;ordered knowledge sharing;cooperative work;distributed team software development |
|
train_1217 | A knowledge-based approach for managing urban infrastructures | This paper presents a knowledge-based approach dedicated to the efficient management, regulation, interactive and dynamic monitoring of urban infrastructures. This approach identifies the data and related treatments common to several municipal activities and defines the requirements and functionalities of the computer tools developed to improve the delivery and coordination of municipal services to the population. The resulting cooperative system called SIGIU is composed of a set of integrated operating systems (SYDEX) and the global planning and coordination system (SYGEC). The objective is to integrate the set of SYDEX and the SYGEC into a single coherent system for all the SIGIU's users according to their tasks, their roles, and their responsibilities within the municipal administration. SIGIU is provided with different measurement and monitoring instruments installed on the system's elements to be supervised. In this context, the information can be presented in different forms: video, pictures, data and alarms. One of SIGIU's objectives is the real-time management of urban infrastructures' control mechanisms. To carry out this process, the alarm control agent creates a mobile agent associated with the alarm, which is sent to a mobile station and warns an operator. Preliminary implementation results show that SIGIU supports effectively and efficiently the decision making process related to managing urban infrastructures | intelligent decision support system;global planning system;sigiu;municipal activities;regulation;video;multi-agent systems;mobile agent;sygec;knowledge-based approach;urban infrastructure management;dynamic monitoring;real-time management;alarm control agent;sydex;cooperative system;coordination system;integrated operating systems;urban planning |
|
train_1218 | Knowledge acquisition for expert systems in accounting and financial problem domains | Since the mid-1980s, expert systems have been developed for a variety of problems in accounting and finance. The most commonly cited problems in developing these systems are the unavailability of the experts and knowledge engineers and difficulties with the rule extraction process. Within the field of artificial intelligence, this has been called the 'knowledge acquisition' (KA) problem and has been identified as a major bottleneck in the expert system development process. Recent empirical research reveals that certain KA techniques are significantly more efficient than others in helping to extract certain types of knowledge within specific problem domains. This paper presents a mapping between these empirical studies and a generic taxonomy of expert system problem domains. To accomplish this, we first examine the range of problem domains and suggest a mapping of accounting and finance tasks to a generic problem domain taxonomy. We then identify and describe the most prominent KA techniques employed in developing expert systems in accounting and finance. After examining and summarizing the existing empirical KA work, we conclude by showing how the empirical KA research in the various problem domains can be used to provide guidance to developers of expert systems in the fields of accounting and finance | problem domain taxonomy;finance;knowledge acquisition;expert systems;artificial intelligence;accounting;rule extraction process |
|
train_1219 | Knowledge organisation of product design blackboard systems via graph decomposition | Knowledge organisation plays an important role in building a knowledge-based product design blackboard system. Well-organised knowledge sources will facilitate the effectiveness and efficiency of communication and data exchange in a blackboard system. In a previous investigation, an approach for constructing blackboard systems for product design using a non-directed graph decomposition algorithm was proposed. In this paper, the relationship between graph decomposition and the resultant blackboard system is further studied. A case study of a number of hypothetical blackboard systems that comprise different knowledge organisations is provided | case study;knowledge-based product design;product design blackboard systems;knowledge organisation;graph decomposition;data exchange |
|
train_122 | A formal framework for viewpoint consistency | Multiple viewpoint models of system development are becoming increasingly important. Each viewpoint offers a different perspective on the target system and system development involves parallel refinement of the multiple views. Viewpoint related approaches have been considered in a number of different guises by a spectrum of researchers. Our work particularly focuses on the use of viewpoints in open distributed processing (ODP) which is an ISO/ITU standardisation framework. The requirements of viewpoint modelling in ODP are very broad and, hence, demanding. Multiple viewpoints, though, prompt the issue of consistency between viewpoints. This paper describes a very general interpretation of consistency which we argue is broad enough to meet the requirements of consistency in ODP. We present a formal framework for this general interpretation; highlight basic properties of the interpretation and locate restricted classes of consistency. Strategies for checking consistency are also investigated. Throughout we illustrate our theory using the formal description technique LOTOS. Thus, the paper also characterises the nature of and options for consistency checking in LOTOS | viewpoint consistency;open distributed processing;formal description technique;iso/itu standardisation framework;system development;odp;consistency checking;development models;formal framework;lotos;multiple viewpoint models;process algebra |
|
train_1220 | Modeling discourse in collaborative work support systems: a knowledge representation and configuration perspective | Collaborative work processes usually raise a lot of intricate debates and negotiations among participants, whereas conflicts of interest are inevitable and support for achieving consensus and compromise is required. Individual contributions, brought up by parties with different backgrounds and interests, need to be appropriately structured and maintained. This paper presents a model of discourse acts that participants use to communicate their attitudes to each other, or affect the attitudes of others, in such environments. The first part deals with the knowledge representation and communication aspects of the problem, while the second one, in the context of an already implemented system, namely HERMES, with issues related to the configuration of the contributions asserted at each discourse instance. The overall work focuses on the machinery needed in a computer-assisted collaborative work environment, the aim being to further enhance the human-computer interaction | human-computer interaction;compromise;consensus;conflicts of interest;hermes;knowledge representation;knowledge communication;collaborative work support systems;discourse modeling |
|
train_1221 | An approach to developing computational supports for reciprocal tutoring | This study presents a novel approach to developing computational supports for reciprocal tutoring. Reciprocal tutoring is a collaborative learning activity, where two participants take turns to play the role of a tutor and a tutee. The computational supports include scaffolding tools for the tutor and a computer-simulated virtual participant. The approach, including system architecture, implementations of scaffolding tools for the tutor and of a virtual participant is presented herein. Furthermore, a system for reciprocal tutoring is implemented as an example of the approach | collaborative learning;computer-simulated virtual participant;intelligent tutoring system;system architecture;reciprocal tutoring computational support;scaffolding tools |
|
train_1222 | Mining the optimal class association rule set | We define an optimal class association rule set to be the minimum rule set with the same predictive power of the complete class association rule set. Using this rule set instead of the complete class association rule set we can avoid redundant computation that would otherwise be required for mining predictive association rules and hence improve the efficiency of the mining process significantly. We present an efficient algorithm for mining the optimal class association rule set using an upward closure property of pruning weak rules before they are actually generated. We have implemented the algorithm and our experimental results show that our algorithm generates the optimal class association rule set, whose size is smaller than 1/17 of the complete class association rule set on average, in significantly less time than generating the complete class association rule set. Our proposed criterion has been shown very effective for pruning weak rules in dense databases | dense databases;data mining;experimental results;upward closure property;predictive association rules;optimal class association rule set mining;predictive power;relational database;redundant computation;weak rule pruning;minimum rule set |
|
train_1223 | Formalising optimal feature weight setting in case based diagnosis as linear programming problems | Many approaches to case based reasoning (CBR) exploit feature weight setting algorithms to reduce the sensitivity to distance functions. We demonstrate that optimal feature weight setting in a special kind of CBR problems can be formalised as linear programming problems. Therefore, the optimal weight settings can be calculated in polynomial time instead of searching in exponential weight space using heuristics to get sub-optimal settings. We also demonstrate that our approach can be used to solve classification problems | classification;heuristics;linear programming;searching;polynomial time;case based diagnosis;optimal feature weight setting;case based reasoning;distance functions;exponential weight space |
|
train_1224 | Formalization of weighted factors analysis | Weighted factors analysis (WeFA) has been proposed as a new approach for elicitation, representation, and manipulation of knowledge about a given problem, generally at a high and strategic level. Central to this proposal is that a group of experts in the area of the problem can identify a hierarchy of factors with positive or negative influences on the problem outcome. The tangible output of WeFA is a directed weighted graph called a WeFA graph. This is a set of nodes denoting factors that can directly or indirectly influence an overall aim of the graph. The aim is also represented by a node. Each directed arc is a direct influence of one factor on another. A chain of directed arcs indicates an indirect influence. The influences may be identified as either positive or negative. For example, sales and costs are two factors that influence the aim of profitability in an organization. Sales has a positive influence on profitability and costs has a negative influence on profitability. In addition, the relative significance of each influence is represented by a weight. We develop Binary WeFA which is a variant of WeFA where the factors in the graph are restricted to being either true or false. Imposing this restriction on a WeFA graph allows us to be more precise about the meaning of the graph and of reasoning in it. Binary WeFA is a new proposal that provides a formal yet sufficiently simple language for logic-based argumentation for use by business people in decision-support and knowledge management. Whilst Binary WeFA is expressively simpler than other logic-based argumentation formalisms, it does incorporate a novel formalization of the notion of significance | organization;weighted factors analysis;significance;wefa graph;directed arc;logic-based argumentation;knowledge manipulation;reasoning;knowledge representation;knowledge management;knowledge elicitation;decision-support;profitability;binary wefa;directed weighted graph |
|
train_1225 | BT voices its support for IP | BTexact's chief technology officer, Mick Reeve, gives his views on the future for voice over DSL services and virtual private networks, and defends the slow rollout of public access WLANs | virtual private networks;btexact;public access wlans;voice over dsl |
|
train_1226 | Temp IT chief rallies troops [Mori] | The appointment of a highly qualified interim IT manager enabled market research company Mori to rapidly restructure its IT department. Now the resulting improvements are allowing it to support an increasing role for technology in the assimilation and analysis of market research | interim it manager;mori;market research company |
|
train_1227 | Will new Palms win laurels? | PalmSource's latest operating system for mobile devices harnesses the ARM architecture to support more powerful business software, but there are concerns over compatibility with older applications | compatibility;palmsource;arm architecture;mobile devices;palm os 5.0;operating system |
|
train_1228 | Outsourced backup saves time | To increase the efficiency of its data backup and to free staff to concentrate on core business, The Gadget Shop is relying on a secure, automated system hosted by a third party | the gadget shop;outsourced;data backup;e-business |
|
train_1229 | Dot-Net makes slow progress | Microsoft's Windows .Net Enterprise Server Release Candidate 1, which was released at the end of last month, provides an early glimpse of the system that will eventually replace Windows 2000 Advanced Server. The software has been improved so that Active Directory is more flexible and easier to deploy; and security, scalability and management have also been enhanced | scalability;windows .net enterprise server;security;active directory |
|
train_123 | A new identification approach for FIR models | The identification of stochastic discrete systems disturbed with noise is discussed in this brief. The concept of general prediction error (GPE) criterion is introduced for the time-domain estimate with optimal frequency estimation (OFE) introduced for the frequency-domain estimate. The two estimation methods are combined to form a new identification algorithm, which is called the empirical frequency-domain optimal parameter (EFOP) estimate, for the finite impulse response (FIR) model interfered by noise. The algorithm theoretically provides the global optimum of the model frequency-domain estimate. Some simulation examples are given to illustrate the new identification method | identification approach;stochastic discrete systems;optimal frequency estimation;frequency-domain estimate;fir models;empirical frequency-domain optimal parameter estimate;time-domain estimate;general prediction error criterion |
|
train_1230 | Server safeguards tax service | Peterborough-based tax consultancy IE Taxguard wanted real-time failover protection for important Windows-based applications. Its solution was to implement a powerful failover server from UK supplier Neverfail in order to provide real-time backup for three core production servers | ie taxguard;neverfail;backup;tax consultancy;failover server |
|
train_1231 | Efficient parallel programming on scalable shared memory systems with High Performance Fortran | OpenMP offers a high-level interface for parallel programming on scalable shared memory (SMP) architectures. It provides the user with simple work-sharing directives while it relies on the compiler to generate parallel programs based on thread parallelism. However, the lack of language features for exploiting data locality often results in poor performance since the non-uniform memory access times on scalable SMP machines cannot be neglected. High Performance Fortran (HPF), the de-facto standard for data parallel programming, offers a rich set of data distribution directives in order to exploit data locality, but it has been mainly targeted towards distributed memory machines. In this paper we describe an optimized execution model for HPF programs on SMP machines that avails itself of mechanisms provided by OpenMP for work sharing and thread parallelism, while exploiting data locality based on user-specified distribution directives. Data locality does not only ensure that most memory accesses are close to the executing threads and are therefore faster, but it also minimizes synchronization overheads, especially in the case of unstructured reductions. The proposed shared memory execution model for HPF relies on a small set of language extensions, which resemble the OpenMP work-sharing features. These extensions, together with an optimized shared memory parallelization and execution model, have been implemented in the ADAPTOR HPF compilation system and experimental results verify the efficiency of the chosen approach | scalable hardware;scalable shared memory;multiprocessor architectures;shared memory multiprocessor;high performance fortran;parallel programming |
|
train_1232 | Techniques for compiling and implementing all NAS parallel benchmarks in HPF | The NAS parallel benchmarks (NPB) are a well-known benchmark set for high-performance machines. Much effort has been made to implement them in High-Performance Fortran (HPF). In previous attempts, however, the HPF versions did not include the complete set of benchmarks, and the performance was not always good. In this study, we implement all eight benchmarks of the NPB in HPF, and parallelize them using an HPF compiler that we have developed. This report describes the implementation techniques and compiler features necessary to achieve good performance. We evaluate the HPF version on the Hitachi SR2201, a distributed-memory parallel machine. With 16 processors, the execution time of the HPF version is within a factor of 1.5 of the hand-parallelized version of the NPB 2.3 beta | hpf compiler;compiler;nas parallel benchmarks;distributed-memory parallel supercomputers;high-performance machines |
|
train_1233 | Advanced optimization strategies in the Rice dHPF compiler | High-Performance Fortran (HPF) was envisioned as a vehicle for modernizing legacy Fortran codes to achieve scalable parallel performance. To a large extent, today's commercially available HPF compilers have failed to deliver scalable parallel performance for a broad spectrum of applications because of insufficiently powerful compiler analysis and optimization. Substantial restructuring and hand-optimization can be required to achieve acceptable performance with an HPF port of an existing Fortran application, even for regular data-parallel applications. A key goal of the Rice dHPF compiler project has been to develop optimization techniques that enable a wide range of existing scientific applications to be ported easily to efficient HPF with minimal restructuring. This paper describes the challenges to effective parallelization presented by complex (but regular) data-parallel applications, and then describes how the novel analysis and optimization technologies in the dHPF compiler address these challenges effectively, without major rewriting of the applications. We illustrate the techniques by describing their use for parallelizing the NAS SP and BT benchmarks. The dHPF compiler generates multipartitioned parallelizations of these codes that are approaching the scalability and efficiency of sophisticated hand-coded parallelizations | legacy fortran codes;compiler optimization;automatic parallelization;rice dhpf compiler;multipartitioning;parallel performance;compiler analysis;hpf compilers;high-performance fortran |
|
train_1234 | Achieving performance under OpenMP on ccNUMA and software distributed shared memory systems | OpenMP is emerging as a viable high-level programming model for shared memory parallel systems. It was conceived to enable easy, portable application development on this range of systems, and it has also been implemented on cache-coherent Non-Uniform Memory Access (ccNUMA) architectures. Unfortunately, it is hard to obtain high performance on the latter architecture, particularly when large numbers of threads are involved. In this paper, we discuss the difficulties faced when writing OpenMP programs for ccNUMA systems, and explain how the vendors have attempted to overcome them. We focus on one such system, the SGI Origin 2000, and perform a variety of experiments designed to illustrate the impact of the vendor's efforts. We compare codes written in a standard, loop-level parallel style under OpenMP with alternative versions written in a Single Program Multiple Data (SPMD) fashion, also realized via OpenMP, and show that the latter consistently provides superior performance. A carefully chosen set of language extensions can help us translate programs from the former style to the latter (or to compile directly, but in a similar manner). Syntax for these extensions can be borrowed from HPF, and some aspects of HPF compiler technology can help the translation process. It is our expectation that an extended language, if well compiled, would improve the attractiveness of OpenMP as a language for high-performance computation on an important class of modern architectures | hpf;shared memory parallel systems;cache-coherent non-uniform memory access;programming model;openmp;parallel programming;single program multiple data
|
train_1235 | Finding performance bugs with the TNO HPF benchmark suite | High-Performance Fortran (HPF) has been designed to provide portable performance on distributed memory machines. An important aspect of portable performance is the behavior of the available HPF compilers. Ideally, a programmer may expect comparable performance between different HPF compilers, given the same program and the same machine. To test the performance portability between compilers, we have designed a special benchmark suite, called the TNO HPF benchmark suite. It consists of a set of HPF programs that test various aspects of efficient parallel code generation. The benchmark suite consists of a number of template programs that are used to generate test programs with different array sizes, alignments, distributions, and iteration spaces. It ranges from very simple assignments to more complex assignments such as triangular iteration spaces, convex iteration spaces, coupled subscripts, and indirection arrays. We have run the TNO HPF benchmark suite on three compilers: the PREPARE prototype compiler, the PGI-HPF compiler, and the GMD Adaptor HPF compiler. Results show performance differences that can be quite large (up to two orders of magnitude for the same test program). Closer inspection reveals that the origin of most of the differences in performance is due to differences in local enumeration and storage of distributed array elements | compiler optimizations;benchmark suite;portable performance;parallel compilers;high-performance fortran;performance portability;hpf compilers;distributed memory machines |
|
train_1236 | Compatibility comparison and performance evaluation for Japanese HPF compilers using scientific applications | The lack of compatibility of High-Performance Fortran (HPF) between vendor implementations has been disheartening scientific application users and has hindered the development of portable programs. Thus parallel computing is still unpopular in the computational science community, even though parallel programming is common to the computer science community. As users would like to run the same source code on parallel machines with different architectures as fast as possible, we have investigated the compatibility of source codes for Japanese HPF compilers (NEC, Fujitsu and Hitachi) with two real-world applications: a 3D fluid code and a 2D particle code. We have found that the source-level compatibility between Japanese HPF compilers is almost preserved, but more effort will be needed to sustain complete compatibility. We have also evaluated parallel performance and found that HPF can achieve good performance for the 3D fluid code with almost the same source code. For the 2D particle code, good results have also been obtained with a small number of processors, but some changes in the original source code and the addition of interface blocks are required | hpf;compilers;high-performance fortran;source compatibility;parallel performance;portable programs;parallel programming
|
train_1237 | High-performance numerical pricing methods | The pricing of financial derivatives is an important field in finance and constitutes a major component of financial management applications. The uncertainty of future events often makes analytic approaches infeasible and, hence, time-consuming numerical simulations are required. In the Aurora Financial Management System, pricing is performed on the basis of lattice representations of stochastic multidimensional scenario processes using the Monte Carlo simulation and Backward Induction methods, the latter allowing for the exploitation of shared-memory parallelism. We present the parallelization of a Backward Induction numerical pricing kernel on a cluster of SMPs using HPF+, an extended version of High-Performance Fortran. Based on language extensions for specifying a hierarchical mapping of data onto an SMP cluster, the compiler generates a hybrid-parallel program combining distributed-memory and shared-memory parallelism. We outline the parallelization strategy adopted by the VFC compiler and present an experimental evaluation of the pricing kernel on an NEC SX-5 vector supercomputer and a Linux SMP cluster, comparing a pure MPI version to a hybrid-parallel MPI/OpenMP version | financial management;aurora financial management system;numerical pricing kernel;backward induction methods;investment strategies;finance;monte carlo simulation;derivative pricing;stochastic processes;pricing |
|
train_1238 | Optimization of element-by-element FEM in HPF 1.1 | In this study, Poisson's equation is numerically evaluated by the element-by-element (EBE) finite-element method in a parallel environment using HPF 1.1 (High-Performance Fortran). In order to achieve high parallel efficiency, the data structures have been altered to node-based data instead of mixtures of node- and element-based data, representing a node-based EBE finite-element scheme (nEBE). The parallel machine used in this study was the NEC SX-4, and experiments were performed on a single node having 32 processors sharing common memory. The HPF compiler used in the experiments is HPF/SX Rev 2.0 released in 1997 (unofficial), which supports HPF 1.1. Models containing approximately 200,000 and 1,500,000 degrees of freedom were analyzed in order to evaluate the method. The calculation time, parallel efficiency, and memory used were compared. The performance of HPF in the conjugate gradient solver for the large model, using the NEC SX-4 compiler option -noshrunk, was about 85% that of the message passing interface | hpf;message passing;hpf compiler;conjugate gradient solver;finite element method;parallel programs;poisson equation;element-by-element
|
train_1239 | Three-dimensional global MHD simulation code for the Earth's magnetosphere using HPF/JA | We have translated a three-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer and the MHD code was fully vectorized and fully parallelized in VPP Fortran. The entire performance and capability of the HPF MHD code could be shown to be almost comparable to that of VPP Fortran. A three-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops with an efficiency of 76.5% using 56 processing elements of the Fujitsu VPP5000/56 in vector and parallel computation that permitted comparison with catalog values. We have concluded that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran | parallel computation;hpf mhd code;mhd simulation;fujitsu vpp5000/56;vector-parallel supercomputer;magnetohydrodynamic simulation
|
train_124 | High-speed CMOS circuits with parallel dynamic logic and speed-enhanced skewed static logic | In this paper, we describe parallel dynamic logic (PDL) which exhibits high speed without the charge sharing problem. PDL uses only parallel-connected transistors for fast logic evaluation and is a good candidate for high-speed low-voltage operation. It has less back-bias effect compared to other logic styles, which use stacked transistors. Furthermore, PDL needs no signal ordering or tapering. PDL with speed-enhanced skewed static logic renders straightforward logic synthesis without the usual area penalty due to logic duplication. Our experimental results on two 32-bit carry lookahead adders using 0.25 mu m CMOS technology show that PDL with speed-enhanced skewed static (SSS) logic reduces the delay over clock-delayed (CD) domino by 15%-27% and the power-delay product by 20%-37% | 32 bit;parallel-connected transistors;logic synthesis;parallel dynamic logic;low-voltage operation;speed-enhanced skewed static logic;0.25 micron;back-bias effect;stacked transistors;power-delay product;delay;high-speed cmos circuits;carry lookahead adders
|
train_1240 | Implementation and evaluation of HPF/SX V2 | We are developing HPF/SX V2, a High Performance Fortran (HPF) compiler for vector parallel machines. It provides some unique extensions as well as the features of HPF 2.0 and HPF/JA. In particular, this paper describes four of them: (1) the ON directive of HPF 2.0; (2) the REFLECT and LOCAL directives of HPF/JA; (3) vectorization directives; and (4) automatic parallelization. We evaluate these features through some benchmark programs on NEC SX-5. The results show that each of them achieved a 5-8 times speedup in 8-CPU parallel execution and the four features are useful for vector parallel execution. We also evaluate the overall performance of HPF/SX V2 by using over 30 well-known benchmark programs from HPFBench, APR Benchmarks, GENESIS Benchmarks, and NAS Parallel Benchmarks. About half of the programs showed good performance, while the other half suggest weakness of the compiler, especially on its runtimes. It is necessary to improve them to put the compiler to practical use | benchmark;compiler;hpf/sx v2;parallelization;vector parallel machines;high performance fortran compiler |
|
train_1241 | Code generator for the HPF Library and Fortran 95 transformational functions | One of the language features of the core language of HPF 2.0 (High Performance Fortran) is the HPF Library. The HPF Library consists of 55 generic functions. The implementation of this library presents the challenge that all data types, data kinds, array ranks and input distributions need to be supported. For instance, more than 2 billion separate functions are required to support COPY-SCATTER fully. The efficient support of these billions of specific functions is one of the outstanding problems of HPF. We have solved this problem by developing a library generator which utilizes the mechanism of parameterized templates. This mechanism allows the procedures to be instantiated at compile time for arguments with a specific type, kind, rank and distribution over a specific processor array. We describe the algorithms used in the different library functions. The implementation gives the ease of generating a large number of library routines from a single template. The templates can be extended with special code for specific combinations of the input arguments. We describe in detail the implementation and performance of the matrix multiplication template for the Fujitsu VPP5000 platform | hpf;data types;parallel languages;code generation;library generator;matrix multiplication;parameterized templates;library functions;generic functions;high performance fortran;hpf library;parallel computing |
|
train_1242 | VPP Fortran and the design of HPF/JA extensions | VPP Fortran is a data parallel language that has been designed for the VPP series of supercomputers. In addition to pure data parallelism, it contains certain low-level features that were designed to extract high performance from user programs. A comparison of VPP Fortran and High-Performance Fortran (HPF) 2.0 shows that these low-level features are not available in HPF 2.0. The features include asynchronous interprocessor communication, explicit shadow, and the LOCAL directive. They were shown in VPP Fortran to be very useful in handling real-world applications, and they have been included in the HPF/JA extensions. They are described in the paper. The HPF/JA Language Specification Version 1.0 is an extension of HPF 2.0 to achieve practical performance for real-world applications and is a result of collaboration in the Japan Association for HPF (JAHPF). Some practical programming and tuning procedures with the HPF/JA Language Specification are described, using the NAS Parallel Benchmark BT as an example | high performance;benchmark;data parallel language;vpp fortran;asynchronous communication;data locality;asynchronous interprocessor communication;explicit shadow;data parallelism |
|
train_1243 | HPF/JA: extensions of High Performance Fortran for accelerating real-world applications | This paper presents a set of extensions on High Performance Fortran (HPF) to make it more usable for parallelizing real-world production codes. HPF has been effective for programs that a compiler can automatically optimize efficiently. However, once the compiler cannot, there have been no ways for the users to explicitly parallelize or optimize their programs. In order to resolve the situation, we have developed a set of HPF extensions (HPF/JA) to give the users more control over sophisticated parallelization and communication optimizations. They include parallelization of loops with complicated reductions, asynchronous communication, user-controllable shadow, and communication pattern reuse for irregular remote data accesses. Preliminary experiments have proved that the extensions are effective at increasing HPF's usability | hpf;compiler;data parallel language;parallel processing;high performance fortran;parallel programming;parallelization of loops;supercomputer
|
train_1244 | Applied ethics in business information units | The primary thesis of this paper is that business information professionals commonly overlook ethical dilemmas in the workplace. Although the thesis remains unproven, the author highlights, by way of real and hypothetical case studies, a number of situations in which ethical tensions can be identified, and suggests that information professionals need to be more aware of the moral context of their actions. Resolving ethical dilemmas should be one of the aims of competent information professionals and their managers, although it is recognized that dilemmas often cannot easily be resolved. A background to the main theories of applied ethics forms the framework for later discussion | moral context;ethical dilemmas;business information professionals;applied ethics;business information units |
|
train_1245 | A brief guide to competitive intelligence: how to gather and use information on competitors | The author outlines the processes involved in competitive intelligence, and discusses what it is, how to do it and gives examples of what happens when companies fail to monitor their competitive environment effectively. The author presents a case study, showing how the company that produced the precursor to the Barbie doll failed to look at their business environment and how this led to the firm's failure. The author discusses what competitive intelligence is, and what it is not, and why it is important for businesses, and presents three models used to describe the competitive intelligence process, going through the various steps involved in defining intelligence requirements and collecting, analyzing, communicating and utilizing competitive intelligence | intelligence analysis;intelligence collection;intelligence utilization;intelligence communication;barbie doll;competitor information;competitive intelligence;business environment
|
train_1246 | Why information departments are becoming academic | This article outlines the increasing convergence between academia and business over the last decade or so, and the mutual benefits that this closer association has brought. It also looks at the growing importance of the information profession, suggesting that this is leading to a greater need for specialist skills, as reflected by the rise in academic courses in this area. However, it argues that increasing specialization must not lead to insularity; if information professionals are truly concerned with gaining a competitive advantage, they must not close their minds to the potential benefits of working with external, non-specialist partners. The benefits that business has reaped from academia, it is contended, suggest that this may also be a fruitful avenue for information departments to explore | information departments;academic courses;information science;universities;business;academia;information profession;specialist skills
|
train_1247 | The changing landscape for multi access portals | Discusses the factors that have made life difficult for consumer portal operators in recent years causing them, like others in the telecommunications, media and technology sector, to take a close look at their business models following the dot.com crash and the consequent reassessment of Internet-related project financing by the venture capital community. While the pressure is on to generate income from existing customers and users, portal operators must reach new markets and find realistic revenue streams. This search for real revenues has led to a move towards charging for content, a strategy being pursued by a large number of horizontal portal players, including MSN and Terra Lycos. This trend is particularly noticeable in China, where Chinadotcom operates a mainland portal and plans a range of fee-based services, including electronic mail. The nature of advertising itself is changing, with portals seeking blue-chip sponsorship and marketing deals that span a number of years. Players are struggling to redefine and reinvent themselves as a result of the changing environment and even the term "portal" is believed to be obsolete, partly due to its dot.com crash associations. Multi-access portals are expected to dominate the consumer sector, becoming bigger and better overall than their predecessors and playing a more powerful role in the consumer environment | multi-access portals;blue-chip sponsorship;fee-based services;consumer portal operators;revenue streams;advertising |
|
train_1248 | Public business libraries: the next chapter | Traces the history of the provision of business information by Leeds Public Libraries, UK, from the opening of the Public Commercial and Technical Library in 1918 to the revolutionary impact of the Internet in the 1990s. Describes how the Library came to terms with the need to integrate the Internet into its mainstream business information services, with particular reference to its limitations and to the provision of company information, market research, British Standards information, press cuttings and articles from specialized trade and scientific journals, and patents information. Focuses on some of the reasons why the public business library is still needed as a service to businesses, even after the introduction of the Internet and considers the Library's changing role and the need to impress on all concerned, especially government, the continuing value of these services. Looks to the partnerships formed by the Library over the years and the ways in which these are expected to assist in realizing future opportunities, in particular, the fact that all public libraries in England gained free Internet access at the end of 2001. Offers some useful ideas about how the Library could develop, noting that SINTO, a Sheffield based information network formed in 1938 and originally a partnership between the public library, the two Sheffield universities and various leading steel companies of the time, is being examined as a model for future services in Leeds. Concludes that the way forward can be defined in terms of five actions: redefinition of priorities; marketing; budgets; resources; and the use of information technology (IT) | trade journal articles;press cuttings;steel companies;market research;leeds public libraries;business information services;sheffield universities;budgets;sinto;government;it use;public business libraries;internet;public commercial and technical library;patents information;british standards information;resources;priority redefinition;scientific journal articles;marketing;history;company information;information network |
|
train_1249 | Aggregators versus disintermediators: battling it out in the information superhighstreet | Perhaps the future of large-scale content aggregators is now no longer in doubt but this was not the case 10 years ago, when many leading industry experts were much more pessimistic in their predictions. In the year that Dialog celebrates its thirtieth anniversary as the world's oldest and largest professional online information service, it is appropriate to look back at these changing perceptions, the reasons for these changes, and why the experts got it wrong. We also look at the present day; the value that large-scale content aggregators bring to the information supply chain; and we discuss why users would choose to use aggregators as opposed to going directly to the publishers | information supply chain;disintermediators;large-scale content aggregators;online information service
|
train_125 | A fast implementation of correlation of long data sequences for coherent receivers | Coherent reception depends upon matching of phase between the transmitted and received signal. Fast convolution techniques based on fast Fourier transform (FFT) are widely used for extracting time delay information from such matching. The latency in processing a large data window of the received signal is a serious overhead for mission critical real time applications. The implementation of a parallel algorithm for correlation of long data sequences in a multiprocessor environment is demonstrated here. The algorithm does processing while acquiring the received signal and reduces the computation overhead considerably because of inherent parallelism | computation;mission critical real time applications;transmitted signal;received signal;parallel algorithm;time delay information;latency;multiprocessor environment;correlation;fast fourier transform;coherent receivers;long data sequences
|
train_1250 | The impact and implementation of XML on business-to-business commerce | This paper discusses the impact analysis of the Extensible Markup Language (XML). Each business partner within a supply chain will be allowed to generate its own data exchange format by adopting an XML meta-data management system on the local side. Following a brief introduction to the information technology for Business to Customer (B2C) and Business to Business (B2B) Electronic Commerce (EC), the impact of XML on the business world of tomorrow is discussed. A real case study of impact analysis on an information exchange platform, Microsoft's BizTalk platform, which is essentially an XML schema builder, and the implementation of an XML commerce application will provide interesting insight for users' future implementations | business to customer;electronic commerce;xml schema builder;business to business;xml;electronic data interchange;enterprise resources planning;extensible markup language;biztalk
|
train_1251 | A synergic analysis for Web-based enterprise resources planning systems | As the central nervous system for managing an organization's mission-critical business data, the Enterprise Resource Planning (ERP) system has evolved to become the backbone of e-business implementation. Since an ERP system is multimodule application software that helps a company manage its important business functions, it should be versatile enough to automate every aspect of business processes, including e-business | enterprise resource planning;customer relationship management;web-based enterprise resources planning;erp;synergic analysis;e-business
|
train_1252 | An agent-oriented and service-oriented environment for deploying dynamic distributed systems | This paper presents JASE, a Java-based Agent-oriented and Service-oriented Environment for deploying dynamic distributed systems. JASE utilizes two important concepts in the field of distributed computing: the concept of services and remote programming with mobile agents. In JASE, mobile agents are used to support applications, and service interface agents are used to wrap services. Service interface agents can dynamically register their services in the Service Server. A mobile agent locates a specific service interface agent by submitting requests to the Service Server with descriptions of required services. JASE uses XML to describe both service descriptions and the mobile agent's queries. JASE supports two kinds of communication facility: tuple space and asynchronous messages. In this paper, the design and implementation of JASE are described. An application shows the suitability and effectiveness of JASE, and a performance evaluation is also made. Finally, related work and some conclusions are given | service-oriented environment;remote programming;dynamic distributed systems;performance evaluation;mobile agents;agent-oriented environment;jase;java-based agent-oriented and service-oriented environment
|
train_1253 | Application of XML for neural network exchange | This article introduces a framework for the interchange of trained neural network models. An XML-based language (Neural Network Markup Language) for the neural network model description is offered. It allows all the components of a neural network model that are necessary for its reproduction to be written down. We propose to use XML notation for the full description of neural models, including the data dictionary, properties of the training sample, preprocessing methods, details of the network structure and parameters, and methods for network output interpretation | network output interpretation;data dictionary;preprocessing methods;xml;neural network exchange;network structure;neural network markup language
|
train_1254 | Supporting unified interface to wrapper generator in integrated information retrieval | Given the ever-increasing scale and diversity of information and applications on the Internet, improving the technology of information retrieval is an urgent research objective. Retrieved information is either semi-structured or unstructured in format and its sources are extremely heterogeneous. In consequence, the task of efficiently gathering and extracting information from documents can be both difficult and tedious. Given this variety of sources and formats, many choose to use mediator/wrapper architecture, but its use demands a fast means of generating efficient wrappers. In this paper, we present a design for an automatic eXtensible Markup Language (XML)-based framework with which to generate wrappers rapidly. Wrappers created with this framework support a unified interface for a meta-search information retrieval system based on the Internet Search Service using the Common Object Request Broker Architecture (CORBA) standard. Greatly advantaged by the compatibility of CORBA and XML, a user can quickly and easily develop information-gathering applications, such as a meta-search engine or any other information source retrieval method. The two main things our design provides are a method of wrapper generation that is fast, simple, and efficient, and a wrapper generator that is CORBA and XML-compliant and that supports a unified interface | internet;integrated information retrieval;corba;meta-search engine;unified interface;wrapper generator;automatic extensible markup language
|
train_1255 | Succession in standardization: grafting XML onto SGML | Succession in standardization is often a problem. The advantages of improvements must be weighed against those of compatibility. If compatibility considerations dominate, a grafting process takes place. According to our taxonomy of succession, there are three types of outcomes. A Type I succession, where grafting is successful, entails compatibility between successors, technical paradigm compliance and continuity in the standards trajectory. In this paper, we examine issues of succession and focus on the Extensible Markup Language (XML). It was to be grafted on the Standard Generalized Markup Language (SGML), a stable standard since 1988. However, XML was a profile, a subset and an extension of SGML (1988). Adaptation of SGML was needed (SGML 1999) to forge full (downward) compatibility with XML (1998). We describe the grafting efforts and analyze their outcomes. Our conclusion is that although SGML was a technical exemplar for XML developers, full compatibility was not achieved. The widespread use of HyperText Mark-up Language (HTML) exemplified the desirability of simplicity in XML, standardization. This and HTML's user market largely explain the discontinuity in SGML-XML succession | type i succession;sgml;standard generalized markup language;grafting process;standardization;xml;extensible markup language |
|
train_1256 | High-speed consistency checking for hypothetical reasoning systems using inference path network | Hypothetical reasoning is popular in fault diagnostics and design systems, but slow reasoning speed is its drawback. The goal of the current study is developing hypothetical reasoning based on an inference path network, which would overcome this drawback. In hypothetical reasoning systems based on an inference path network, there is much room for improvement regarding the computing costs of connotation processing and consistency checking. The authors of this study demonstrate improvement ideas regarding one of these problems, namely, consistency checking. First, the authors obtained necessary and sufficient conditions under which inconsistencies occur during hypothesis composition. Based on the obtained results, the authors proposed an algorithm for speeding up the process of consistency checking. Processing with this algorithm in its core consists of transforming the inference path network in such a way that inconsistencies do not occur during the hypothesis composition, under the condition of unchanged solution hypotheses. The efficiency of this algorithm was confirmed by tests | fault diagnostics;hypothesis composition;hypothetical reasoning;reasoning speed;inconsistencies;high-speed consistency checking;speed up;inference path network
|
train_1257 | Definition of a similarity measure between cases based on auto/cross-fuzzy thesauri | A similarity measure between cases is needed in order to evaluate the degree of similarity when using past similar cases in order to resolve current problems. In similar case retrieval, multiple indices are set up in order to characterize the queries and individual cases, then terms are given as values to each. The similarity measure between cases commonly used is defined using the rate at which the values provided from the corresponding indices match. In practice, however, values cannot be expected to be mutually exclusive. As a result, a natural expansion of this approach is to have relationships in which mutually similar meanings are reflected in the similarity measure between cases. In this paper the authors consider an auto-fuzzy thesaurus which gives the relationship for values between corresponding indices and a cross-fuzzy thesaurus which gives the relationship for values between mutually distinct indices, then define a similarity measure between cases which considers the relationship of index values based on these thesauri. This definition satisfies the characteristics required for the operation of case-based retrieval even when one value is not necessarily given in the index. Finally, using a test similar case retrieval system, the authors perform a comparative analysis of the proposed similarity measure between cases and a conventional approach | case similarity measure;problem solving;relationship indices;case-based retrieval;mutually distinct indices;similar case retrieval;auto-fuzzy thesaurus;decision making support system;corresponding indices;cross-fuzzy thesaurus
|
train_1258 | Implementation and performance evaluation of a FIFO queue class library for time warp | The authors describe the implementation, use, and performance evaluation of a FIFO queue class library by means of a high-performance, easy-to-use interface employed for queuing simulations in parallel discrete simulations based on the time warp method. Various general-purpose simulation libraries and languages have been proposed, and among these some have the advantage of not requiring users to define anything other than the state vector, and not needing awareness of rollback under a platform which performs state control based on copies. However, because the state vectors must be defined as simple data structures without pointers, dynamic data structures such as a FIFO queue cannot be handled directly. Under the proposed class library, both the platform and the user can handle such structures in the same fashion that embedded data structures are handled. In addition, instead of all stored data, just the operational history can be stored and recovered efficiently at an effectively minimal cost by taking advantage of the first-in-first-out characteristics of the above data structures. When the kernel deletes past state histories during a simulation, garbage collection is also performed transparently using the corresponding method | object oriented method;embedded data structures;general-purpose simulation libraries;fifo queue;simulation languages;state management;operational history;performance evaluation;first-in-first-out characteristics;state vectors;time warp simulation;queuing simulations;dynamic data structures;garbage collection;easy-to-use interface;class library;parallel discrete simulations
|
train_1259 | A mechanism for inferring approximate solutions under incomplete knowledge based on rule similarity | This paper proposes an inference method which can obtain an approximate solution even if the knowledge stored in the problem-solving system is incomplete. When a rule needed for solving the problem does not exist, the problem can be solved by using rules similar to the existing rules. In an implementation using the SLD procedure, a resolution is executed between a subgoal and a rule if an atom of the subgoal is similar to the consequence atom of the rule. Similarities between atoms are calculated using a knowledge base of words, taking account of the reasoning situation, and the reliability of the derived solution is calculated based on these similarities. If many solutions are obtained, they are grouped into classes of similar solutions and a representative solution is then selected for each class. The proposed method was verified experimentally by solving simple problems | word knowledge base;problem solving;inference method;subgoal atom;reliability;sld procedure;reasoning;consequence atom;rule similarity;incomplete knowledge;representative solution;approximate solution;common sense knowledge
|
train_126 | A new architecture for implementing pipelined FIR ADF based on classification of coefficients | In this paper, we propose a new method for implementing pipelined finite-impulse response (FIR) adaptive digital filter (ADF), with an aim of reducing the maximum delay of the filtering portion of conventional delayed least mean square (DLMS) pipelined ADF. We achieve a filtering section with a maximum delay of one by simplifying a pre-upsampled and a post-downsampled FIR filter using the concept of classification of coefficients. This reduction is independent of the order of the filter, which is an advantage when the order of the filter is very large, and as a result the method can also be applied to infinite impulse response (IIR) filters. Furthermore, when the proposed method is compared with the transpose ADF, which has a filtering section with zero delay, it is realized that it significantly reduces the maximum delay associated with updating the coefficients of FIR ADF. The effect of this is that the proposed method exhibits a higher convergence speed in comparison to the transpose FIR ADF | pre-upsampled filter;pipelined fir adf;coefficient classification;adaptive digital filter;convergence speed;post-downsampled filter;maximum delay;delayed least mean square filter
|
train_1260 | A dataflow computer which accelerates execution of sequential programs by precedent firing instructions | In the dataflow machine, it is important to avoid degradation of performance in sequential processing, and it is important from the viewpoint of hardware scale to reduce the number of waiting operands. This paper demonstrates that processing performance is degraded by sequential processing in the switching process, and presents a remedy. Precedent firing control is proposed as the remedy, and it is shown by a simulation that the execution time and the total number of waiting operands can be reduced by precedent firing control. Then the hardware scale is examined as an evaluation of precedent firing control | hardware scale;processing performance;execution time;execution acceleration;precedent firing control;precedent firing instructions;sequential programs;computer architecture;switching process;waiting operands;parallel processing;dataflow computer
|
train_1261 | Topology-adaptive modeling of objects using surface evolutions based on 3D mathematical morphology | Level set methods were proposed mainly by mathematicians for constructing a model of a 3D object of arbitrary topology. However, those methods are computationally inefficient due to repeated distance transformations and increased dimensions. In the paper, we propose a new method for fast modeling of objects of arbitrary topology by using a surface evolution approach based on mathematical morphology. Given sensor data covering the whole object surface, the method begins with an initial approximation of the object by evolving a closed surface into a model topologically equivalent to the real object. The refined approximation is then performed using energy minimization. The method has been applied in several experiments using range data, and the results are reported in the paper | energy minimization;arbitrary topology;surface evolutions;topology-adaptive modeling;initial approximation;3d mathematical morphology;repeated distance transformations;level set methods;3d object;pseudo curvature flow;refined approximation;range data
|
train_1262 | The development and evaluation of SHOKE2000: the PCI-based FPGA card | This paper describes a PCI-based FPGA card, SHOKE2000, which was developed in order to study reconfigurable computing. Since the latest field programmable gate arrays (FPGA) consist of input/output (I/O) configurable blocks as well as internal configurable logic blocks, they not only realize various user logic circuits but also connect with popular I/O standards easily. These features enable FPGA to connect several devices with different interfaces, and thus new reconfigurable systems would be realizable by connecting the FPGA with devices such as digital signal processors (DSP) and analog devices. This paper describes the basic functions of SHOKE2000, which was developed for realizing hybrid reconfigurable systems consisting of FPGA, DSP, and analog devices. We also present application examples of SHOKE2000, including a simple image recognition application, a distributed shared memory computer cluster, and teaching materials for computer education | user logic circuits;computer education;image recognition application;interfaces;dsp;i/o standard;pci;analog devices;reconfigurable computing;fpga;digital signal processors;fpga card;teaching materials;intellectual property;hybrid reconfigurable systems;shoke2000;field programmable gate arrays;distributed shared memory computer cluster |
|
train_1263 | Super high definition image (WHD: Wide/Double HD) transmission system | This paper describes a WHD image transmission system constructed from a display projector, CODECs, and a camera system imaging a super high definition image (WHD: Wide/Double HD) corresponding to two screen portions of common high-vision images. This system was developed as a transmission system to communicate with or transmit information giving a reality-enhanced feeling to a remote location by using images of super high definition. In addition, the correction processing for the distortions of images occurring due to the structure of the camera system, an outline of the transmission experiments using the proposed system, and subjective evaluation experiments using WHD images are presented | codecs;whd image transmission system;camera system imaging;reality-enhanced feeling;super high definition image transmission system |
|
train_1264 | Estimation of the vanishing point for automatic driving system using a cross ratio | This paper proposes a new method to estimate the vanishing point used as the vehicle heading, which is essential in automatic driving systems. The proposed method uses a cross ratio comprised of a ratio of lengths from four collinear points for extracting the edges that shape the vanishing point. Then, lines that intersect at one point are fitted to the edges in a Hough space. Consequently, the vanishing point is estimated robustly even when the lane markings are occluded by other vehicles. In the presence of lane markings, the road boundaries are also estimated at the same time. Experimental results from images of a real road scene show the effectiveness of the proposed method | automatic driving system;automatic driving systems;hough space;lane markings;vanishing point estimation;collinear points;cross ratio;real road scene
|
train_1265 | Optimization of requantization parameter for MPEG transcoding | This paper considers transcoding in which an MPEG stream is converted to a low-bit-rate MPEG stream, and proposes a method in which the transcoding error can be reduced by optimally selecting the quantization parameter for each macroblock. In transcoding with a low compression ratio, it is crucial to prohibit transcoding with a requantization parameter which is 1 to 2 times the quantization parameter of the input stream. Consequently, as the first step, an optimization method for the requantization parameter is proposed which accounts for the error propagation effect caused by interframe prediction. Then, the proposed optimization method is extended so that the method can also be applied to the case of a high compression ratio in which the rate-distortion curve is approximated for each macroblock in the range of requantization parameters larger than 2 times the quantization parameter. It is verified by a simulation experiment that the PSNR is improved by 0.5 to 0.8 dB compared to the case in which a 6 Mbit/s MPEG stream is not optimized by twofold recompression | twofold recompression;requantization parameter optimization;low-bit-rate mpeg stream;error propagation effect;macroblock;rate conversion;simulation;rate control;transcoding error;psnr;6 mbit/s;interframe prediction;compression ratio;rate-distortion curve
|
train_1266 | An intelligent information gathering method for dynamic information mediators | The Internet is spreading into our society rapidly and is becoming one of the information infrastructures that are indispensable for our daily life. In particular, the WWW is widely used for various purposes such as sharing personal information, academic research, business work, and electronic commerce, and the amount of available information is increasing rapidly. We usually utilize information sources on the Internet as individual stand-alone sources, but if we can integrate them, we can add more value to each of them. Hence, information mediators, which integrate information distributed on the Internet, are drawing attention. In this paper, under the assumption that the information sources to be integrated are updated frequently and asynchronously, we propose an information gathering method that constructs an answer to a query from a user, accessing information sources to be integrated properly within an allowable time period. The proposed method considers the reliability of data in the cache and the quality of answer in order to efficiently access information sources and to provide appropriate answers to the user. As evaluation, we show the effectiveness of the proposed method by using an artificial information integration problem, in which some parameters can be modified, and a real-world flight information service compared with a conventional FIFO information gathering method | internet;electronic commerce;academic research;business work;dynamic information mediators;real-world flight information service;information infrastructures;artificial information integration problem;intelligent information gathering method;www |
|
train_1267 | 3D reconstruction from uncalibrated-camera optical flow and its reliability evaluation | We present a scheme for reconstructing a 3D structure from optical flow observed by a camera with an unknown focal length in a statistically optimal way as well as evaluating the reliability of the computed shape. First, the flow fundamental matrices are optimally computed from the observed flow. They are then decomposed into the focal length, its rate of change, and the motion parameters. Next, the flow is optimally corrected so that it satisfies the epipolar equation exactly. Finally, the 3D positions are computed, and their covariance matrices are evaluated. By simulations and real-image experiments, we test the performance of our system and observe how the normalization (gauge) for removing indeterminacy affects the description of uncertainty | covariance matrices;3d reconstruction;real-image experiments;epipolar equation;reliability evaluation;normalization;flow fundamental matrices;motion parameters;uncalibrated-camera optical flow
|
train_1268 | Reachability in contextual nets | Contextual nets, or Petri nets with read arcs, are models of concurrent systems with context dependent actions. The problem of reachability in such nets consists in finding a sequence of transitions that leads from the initial marking of a given contextual net to a given goal marking. The solution to this problem that is presented in this paper consists in constructing a finite complete prefix of the unfolding of the given contextual net, that is a finite prefix in which all the markings that are reachable from the initial marking are present, and in searching in each branch of this prefix for the goal marking by solving an appropriate linear programming problem | linear programming;concurrent systems;contextual nets reachability;finite prefix;context dependent actions;goal marking;petri nets |
|
train_1269 | Minimizing the number of successor states in the stubborn set method | Combinatorial explosion which occurs in parallel compositions of LTSs can be alleviated by letting the stubborn set method construct on-the-fly a reduced LTS that is CFFD- or CSP-equivalent to the actual parallel composition. This article considers the problem of minimizing the number of successor states of a given state in the reduced LTS. The problem can be solved by constructing an and/or-graph with weighted vertices and by finding a set of vertices that satisfies a certain constraint such that no set of vertices satisfying the constraint has a smaller sum of weights. Without weights, the and/or-graph can be constructed in low-degree polynomial time w.r.t. the length of the input of the problem. However, since actions can be nondeterministic and transitions can share target states, it is not known whether the weights are generally computable in polynomial time. Consequently, it is an open problem whether minimizing the number of successor states is as "easy" as minimizing the number of successor transitions | combinatorial explosion;weighted vertices;csp-equivalence;stubborn set method;low-degree polynomial time |
|
train_127 | Asymptotical stability in discrete-time neural networks | In this work, we present a proof of the existence of a fixed point and a generalized sufficient condition that guarantees the stability of it in discrete-time neural networks by using the Lyapunov function method. We also show that for both symmetric and asymmetric connections, the unique attractor is a fixed point when several conditions are satisfied. This is an extended result of Chen and Aihara (see Physica D, vol. 104, no. 3/4, p. 286-325, 1997). In particular, we further study the stability of equilibrium in discrete-time neural networks with the connection weight matrix in form of an interval matrix. Finally, several examples are shown to illustrate and reinforce our theory | connection weight matrix;unique attractor;symmetric connections;asymptotical stability;discrete-time neural networks;lyapunov function method;equilibrium stability;fixed point;asymmetric connections;interval matrix;generalized sufficient condition;stability |
|
train_1270 | A comparison of different decision algorithms used in volumetric storm cells classification | Decision algorithms useful in classifying meteorological volumetric radar data are discussed. Such data come from the radar decision support system (RDSS) database of Environment Canada and concern summer storms created in this country. Some research groups used the data completed by RDSS for verifying the utility of chosen methods in volumetric storm cells classification. The paper consists of a review of experiments that were made on the data from the RDSS database of Environment Canada and presents the quality of particular classifiers. The classification accuracy coefficient is used to express the quality. For five research groups that conducted their experiments in a similar way it was possible to compare the outputs received. Experiments showed that the support vector machine (SVM) method and rough set algorithms which use object oriented reducts for rule generation to classify volumetric storm data perform better than other classifiers | support vector machine;rough set algorithms;meteorological volumetric radar data;decision algorithms;summer storms;radar decision support system;object oriented reducts;volumetric storm cells classification;classification accuracy
|
train_1271 | Verification of non-functional properties of a composable architecture with Petri nets | In this paper, we introduce our concept of composability and present the MSS architecture as an example of a composable architecture. MSS claims to be composable with respect to timing properties. We discuss how to model and prove properties in such an architecture with time-extended Petri nets. As a result, the first step of a proof of composability is presented as well as a new kind of Petri net, which is more suitable for modeling architectures like MSS | composable architecture;mss architecture;timing properties;petri nets;non-functional properties verification;proof of composability
|
train_1272 | Global action rules in distributed knowledge systems | Previously Z. Ras and J.M. Zytkow (2000) introduced and investigated a query answering system based on distributed knowledge mining. The notion of an action rule was introduced by Z. Ras and A. Wieczorkowska (2000), with e-business taken as its application domain. In this paper, we generalize the notion of action rules in a similar way to handling global queries. Mainly, when values of attributes for a given customer, used in action rules, cannot be easily changed by a business user, definitions of these attributes are extracted from other sites of a distributed knowledge system. To be more precise, attributes at every site of a distributed knowledge system are divided into two sets: stable and flexible. Values of flexible attributes, for a given consumer, can sometimes be changed and this change can be influenced and controlled by a business user. However, some of these changes (for instance to the attribute "profit") cannot be made directly to a chosen attribute. In this case, definitions of such an attribute in terms of other attributes have to be learned. These new definitions are used to construct action rules showing what changes in values of flexible attributes, for a given consumer, are needed in order to re-classify this consumer the way the business user wants. But the business user may be either unable or unwilling to proceed with actions leading to such changes. In all such cases we may search for definitions of these flexible attributes looking at either local or remote sites for help | distributed knowledge mining;query answering system;attributes;global action rules;e-commerce;action rules