Search Results

Search found 7081 results on 284 pages for 'idle processing'.

  • PARTNER WEBCAST (June 4): Enhance Customer experience with Nimble Storage SmartStack for Oracle with Cisco

    - by Zeynep Koch
    Live Webcast: Enhance Customer Experience with Nimble Storage SmartStack for Oracle with Cisco. A webcast for resellers who sell Oracle workloads to customers. Wednesday, June 4, 2014, 8:00 AM PDT / 11:00 AM EDT. Register today. Nimble Storage SmartStack™ for Oracle provides a pre-validated reference architecture that speeds deployments and minimizes risk. IT and Oracle administrators and architects realize the importance of the underlying operating system, virtualization software, and storage in maintaining service levels and staying on budget. In this webinar, you will learn how Nimble Storage SmartStack for Oracle provides a converged infrastructure for Oracle database online transaction processing (OLTP) and online analytical processing (OLAP) environments with Oracle Linux and Oracle VM. SmartStack delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or across multiple Oracle Real Application Clusters (RAC) nodes. Nimble Storage SmartStack for Oracle with Cisco can help you provide: improved Oracle performance; stress-free data protection and disaster recovery (DR) of your Oracle database; higher availability and uptime; and accelerated Oracle development and testing - all for dramatically less than what you're paying now. Presenters: Doan Nguyen, Senior Principal Product Marketing Director, Oracle; Vanessa Scott, Business Development Manager, Cisco; Ibrahim "Ibby" Rahmani, Product and Solutions Marketing, Nimble Storage. Join this event to learn from our Nimble Storage and Oracle experts how to optimize your customers' Oracle environments. Register today to learn more!

    Read the article

  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
    Here is a list of criteria for testing whether your business systems (Oracle E-Business Suite, EBS, or otherwise) can support your lines of digital business - if you score low, you need Oracle Billing and Revenue Management (BRM). Functions: Scalability, High Availability (99.999%), Performance, Extensibility (e.g. APIs, Tools), Upgradability, Maintenance, Security, Standards Compliance, Regulatory Compliance (e.g. SOX), User Experience, Implementation Complexity. Features: Customer Management, Real-Time Service Authorization, Pricing/Promotions Flexibility, Subscriptions, Usage Rating and Pricing, Real-Time Balance Mgmt., Non-Currency Resources, Billing & Invoicing, A/R & G/L, Payments & Collections, Revenue Assurance, Integration with Key Enterprise Applications, Reporting, Business Intelligence, Order & Service Mgmt (OSM), Siebel CRM, E-Business Suite, On-/Off-line Mediation, Payment Processing, Taxation, Royalties & Settlements, Operations Management, Disaster Recovery. Overall Evaluation: Implementation, Configuration, Extensibility, Maintenance, Upgradability, Functional Richness, Feature Richness, Usability, OOB Integrations, Operations Management, Leveraging Oracle Technology, Overall Fit for Purpose. You need Oracle BRM because it is: built for high-volume transaction processing; able to monetize any service or event based on any metric; able to support high-volume usage rating, pricing and promotions; able to provide real-time charging, service authorization and balance management; able to support any account structure (e.g. corporate hierarchies); able to scale from low volumes to extremely high volumes of transactions (e.g. billions of transactions per hour); and it exposes every single function via APIs (e.g. Java, C/C++, Perl, COM, Web Services, JCA). Immediate business benefits of BRM: improved business agility and performance - it supports the flexibility, innovation, and customer-centricity required for current and future business models; faster time to market for new products and services - it supports a real-time 360-degree view of the customer, so products can be launched to targeted customers at a record-breaking pace; streamlined deployment and operation - productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs; an extensible and scalable solution - risk is minimized, with the initial phase deployed rapidly and the solution extended and scaled seamlessly per business requirements. Key considerations: productized integration with key Oracle applications lowers integration risks and cost and gives an efficient order-to-cash process; an engineered solution certified on the Exa platform (Exadata was tested at PayPal in the re-platforming project) gives optimal performance of Oracle assets on Oracle hardware; a productized solution in Rapid Offer Design and Order Delivery gives fast offer design and implementation and significantly shorter order cycle time; and productized integration with Oracle Enterprise Manager gives visibility into system operability for optimal uptime.

    Read the article

  • How do you deal with translating theory into practice?

    - by Mr. Shickadance
    Hello all! Being a computer scientist in a research field, I am often tasked with working alongside professionals outside of the software domain (think math people, electrical engineers, etc.), and then translating their theories and ideas into real-world implementations. I often find it difficult when they present a theoretical problem which appears to be somewhat disconnected from reality. I am not saying that the theory is bogus, only that it is difficult to translate into real-world situations. For example, recently I have been working with software-defined radios. We are exploring many different areas, but often the math specialists in my group will present a problem which is heavily grounded in theory (signal processing, physics, whatever). I often struggle when it is hard to draw direct parallels between the theory and the real-world implementation that I need to develop. Say we are working on an energy detector: the theory person in my group would say "you need to measure the noise variance with no signal present." This leads me to think "how the hell do I isolate noise from a signal in reality?" There are many examples, but I hope you see where I am going. So, my question is: how does one deal with the implementation of theoretical concepts when the theory seems detached from reality? Or at least when the connections are not so clear. Or perhaps the person with the 'theory' may be ignorant of real restrictions? Note: I found this to be a hard question to ask - hopefully you are following me. If you have suggestions on how I could improve it, by all means let me know! Thanks for looking! EDIT: To be a bit more clear, I understand in situations like this that I must learn that specific domain myself to an extent (i.e. signal processing), but I am more concerned with cases where those theoretical concepts do not appear to be as grounded in practice as one would like.
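    For the energy-detector example specifically, "measure the noise variance with no signal present" usually maps to a calibration pass: record samples while the band (or transmitter) is known to be quiet, estimate the variance from those samples, and derive a detection threshold from it. A minimal NumPy sketch of that idea - synthetic data and a made-up threshold factor, purely illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        # Calibration pass: capture while no transmitter is active
        # (in practice: tune to a quiet band or a known-idle interval).
        quiet = rng.normal(0.0, 0.5, size=100_000)  # stand-in for a signal-free capture
        noise_var = np.var(quiet)                   # estimated noise variance

        def energy_detect(window, noise_var, factor=2.0):
            """Flag a window whose average energy exceeds a threshold
            derived from the calibrated noise variance."""
            energy = np.mean(np.abs(window) ** 2)
            return energy > factor * noise_var

        # Test window: a sinusoid buried in the same noise
        t = np.arange(1024)
        rx = 1.0 * np.sin(0.1 * t) + rng.normal(0.0, 0.5, size=1024)
        print(energy_detect(rx, noise_var))  # True - energy sits well above the noise floor

    The practical point: you never isolate noise from a live signal; you exploit a moment when the signal is known to be absent.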

    Read the article

  • Architecting Python application consisting of many small scripts

    - by Duke Dougal
    I am building an application which, at the moment, consists of many small Python scripts. Each Python script processes items from one Amazon SQS queue. Emails come into an initial queue and are processed by a script; typically the script will do a small unit of processing (for example, parse the email and store some database fields), then an item will be placed on the next queue for further processing, until eventually the email has finished going through the various scripts and queues. What I like about this approach is that it is very loosely coupled. However, I'm not sure how I should deploy it live. Should I make each script a daemon which is constantly polling its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or maybe I should not have lots of small Python scripts but one large application? Specific questions: How should I run each of these scripts - as a daemon with some sort of restart monitor to restart them in case they stop for any reason? If yes, should I have some program which orchestrates this? Or is the idea of many small scripts not a good one - would it make more sense to have a larger Python program which contains all the functionality and does all the queue polling and execution of functionality for each queue? What is the current preferred approach to daemonising Python scripts? Broadly, I would welcome any comments or opinions on any aspect of this. Thanks
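    A common shape for the one-script-per-queue approach is a long-polling worker loop, with process supervision (systemd, supervisord, or upstart) handling the daemonising and restarts rather than the script itself. A minimal sketch using boto3 - an assumption on my part, as are the queue URL and handler names:

        import boto3

        sqs = boto3.client("sqs")
        QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/parse-email"  # hypothetical

        def handle(body: str) -> None:
            # one small unit of processing, then enqueue an item on the next queue
            ...

        def run_worker() -> None:
            while True:
                # Long polling: blocks up to 20s, so an idle queue costs almost nothing
                resp = sqs.receive_message(
                    QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
                )
                for msg in resp.get("Messages", []):
                    handle(msg["Body"])
                    # Delete only after successful processing; if the worker crashes,
                    # the message becomes visible again and is retried
                    sqs.delete_message(
                        QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
                    )

        if __name__ == "__main__":
            run_worker()  # run one such process per queue under a supervisor

    Letting the supervisor own restarts keeps each script trivially simple, which preserves the loose coupling you like about the design.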

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes Name different device categories Discuss the functions and structure of I/O modules Describe the principles of Programmed I/O Describe the principles of Interrupt-driven I/O Describe the principles of DMA Discuss the evolution characteristics of I/O channels Describe different types of I/O interface Explain the principles of point-to-point and multipoint configurations Discuss the way in which a FireWire serial bus functions Discuss the principles of InfiniBand architecture External Devices An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories… Human readable – e.g. video display Machine readable – e.g. magnetic disk Communications – e.g. wifi card I/O Modules An I/O module has two major functions… Interface to the processor and memory via the system bus or central switch Interface to one or more peripheral devices by tailored data links Module Functions The major functions or requirements for an I/O module fall into the following categories… Control and timing Processor communication Device communication Data buffering Error detection The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following… Command decoding Data Status reporting Address recognition The I/O device must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor. I/O Module Structure An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations… Programmed I/O Interrupt-driven I/O DMA Programmed I/O When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor. I/O Commands To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.

    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor… Control – used to activate a peripheral and tell it what to do Test – used to test various status conditions associated with an I/O module and its peripherals Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly (see the sketch at the end of these notes). I/O Instructions With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – and when the processor issues an I/O command, the command contains the address of the desired device, thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible… Memory-mapped I/O Isolated I/O (for a detailed explanation read page 245 of the book) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up. Interrupt-driven I/O Interrupt-driven I/O works as follows… The processor issues an I/O command to a module and then goes on to do some other useful work The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor The processor then executes the data transfer and then resumes its former processing Interrupt Processing The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation the following sequence of hardware events occurs… The device issues an interrupt signal to the processor The processor finishes execution of the current instruction before responding to the interrupt The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed. The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers because the interrupt operation may modify these The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused the interrupt When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers Finally, the PSW and program counter values from the stack are restored.

    Design Issues Two design issues arise in implementing interrupt I/O… Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt? If multiple interrupts have occurred, how does the processor decide which one to process? Addressing device recognition, four general categories of techniques are in common use… Multiple interrupt lines Software poll Daisy chain Bus arbitration For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks… The I/O transfer rate is limited by the speed with which the processor can test and service a device The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer Direct Memory Access When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA). DMA Function DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending the DMA module the following information… Whether a read or write is requested, using the read or write control line between the processor and the DMA module The address of the I/O device involved, communicated on the data lines The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register The number of words to be read or written, communicated via the data lines and stored in the data count register The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer. I/O Channels and Processors Characteristics of I/O Channels As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common… A selector channel controls multiple high-speed devices. A multiplexor channel can handle I/O with multiple devices at the same time.

    The external interface: FireWire and InfiniBand Types of Interfaces One major characteristic of the interface is whether it is serial or parallel… parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows… The I/O module sends a control signal requesting permission to send data The peripheral acknowledges the request The I/O module transfers data The peripheral acknowledges receipt of data For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook
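    To make the programmed-I/O drawback concrete, here is a toy Python simulation (purely illustrative - no real hardware semantics) of a processor busy-waiting on a device's status register. With interrupt-driven I/O the loop disappears, because the device signals the processor when it is ready:

        import random

        class ToyDevice:
            """Toy peripheral that becomes ready some number of polls in the future."""
            def __init__(self):
                self._countdown = random.randint(50, 500)

            def status_ready(self) -> bool:
                self._countdown -= 1
                return self._countdown <= 0

            def read_data(self) -> int:
                return 0x42

        def programmed_read(dev: ToyDevice):
            """Programmed I/O: the processor polls the status register in a loop."""
            wasted_polls = 0
            while not dev.status_ready():   # cycles the CPU could have spent elsewhere
                wasted_polls += 1
            return dev.read_data(), wasted_polls

        data, wasted = programmed_read(ToyDevice())
        print(f"read {data:#x} after {wasted} wasted polls")

    Every iteration of that while loop is work the processor does purely to discover that the device is not ready yet; interrupts and DMA exist to reclaim that time.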

    Read the article

  • How to resolve - dpkg error: old pre-removal script returned error exit status 102

    - by Siva Prasad Varma
    I am unable to install or remove a package on my Ubuntu 10.04 due to the following error. $ sudo apt-get autoremove Password: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: busybox 0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded. 1 not fully installed or removed. Need to get 0B/212kB of archives. After this operation, 627kB disk space will be freed. Do you want to continue [Y/n]? y Selecting previously deselected package nscd. (Reading database ... 235651 files and directories currently installed.) Preparing to replace nscd 2.11.1-0ubuntu7.8 (using .../nscd_2.11.1-0ubuntu7.8_amd64.deb) ... invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd dpkg: warning: old pre-removal script returned error exit status 102 dpkg - trying script from the new package instead ... invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd dpkg: error processing /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb (--unpack): subprocess new pre-removal script returned error exit status 102 update-rc.d: warning: /etc/rc2.d/S76nscd is not a symbolic link invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 102 Errors were encountered while processing: /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) What should I do to resolve this error? I have tried sudo dpkg --remove --force-remove-reinstreq nscd but it did not work.

    Read the article

  • How do I stop XNA/Visual Studio from rebuilding my content project every time I build?

    - by Phil Quinn
    My group and I are working on a game in XNA 4.0 with Visual Studio 2010/2012. The main solution has 6 projects: 2 XNA game projects (1 executable/ 1 class library), 1 WPF executable for the level editor, 2 standard class libraries, and a content project. Originally, the editor and engine XNA game projects had a content reference to separate content projects. Recently, I consolidated the content projects into one to simplify asset additions. Since pushing these changes to our git repo, certain members of my group have been experiencing weird build issues. Every time they run the project, they have to re-build all of the assets. This happens regardless of whether any changes were made, even if they just run the project directly after building. I've taken a few steps to figure out why this is happening. Below is the MSBuild output set on Normal verbosity. The seemingly important part is at 4, with the line 4> Rebuilding all content because build settings have changed 1>------ Build started: Project: Engine.Core, Configuration: Debug x86 ------ 1>Build started 11/29/2012 3:24:24 AM. 1>ResolveAssemblyReferences: 1> A TargetFramework profile exclusion list will be generated. 1>EmbedXnaFrameworkRuntimeProfile: 1>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 1>GenerateTargetFrameworkMonikerAttribute: 1>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 1>CoreCompile: 1>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 1>XnaWriteCacheFile: 1>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 1>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 1> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 1>_CopyAppConfigFile: 1>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files. 1>CopyFilesToOutputDirectory: 1> Engine.Core -> <solution-dir>\src\Engine.Core\bin\x86\Debug\TimeSink.Engine.Core.dll 1> 1>Build succeeded. 1> 1>Time Elapsed 00:00:00.13 2>------ Build started: Project: TimeSink.Entities, Configuration: Debug x86 ------ 2>Build started 11/29/2012 3:24:25 AM. 2>ResolveAssemblyReferences: 2> A TargetFramework profile exclusion list will be generated. 2>EmbedXnaFrameworkRuntimeProfile: 2>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 2>GenerateTargetFrameworkMonikerAttribute: 2>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 2>CoreCompile: 2>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 2>XnaWriteCacheFile: 2>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 2>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 2> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 2>CopyFilesToOutputDirectory: 2> TimeSink.Entities -> <solution-dir>\src\TimeSink.Entities\bin\x86\Debug\TimeSink.Entities.dll 2> 2>Build succeeded. 
2> 2>Time Elapsed 00:00:00.11 3>------ Build started: Project: Editor (Editor\Editor), Configuration: Debug x86 ------ 4>------ Build started: Project: Engine.Game, Configuration: Debug x86 ------ 3>Build started 11/29/2012 3:24:25 AM. 3>CoreCompile: 3> All content is already up to date 3>ResolveAssemblyReferences: 3> A TargetFramework profile exclusion list will be generated. 3>EmbedXnaFrameworkRuntimeProfile: 3>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 3>GenerateTargetFrameworkMonikerAttribute: 3>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 3>CoreCompile: 3>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 3>XnaWriteCacheFile: 3>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 3>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 3> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 3>_CopyOutOfDateNestedContentItemsToOutputDirectory: 3>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files. 3>CopyFilesToOutputDirectory: 3> Editor -> <solution-dir>\src\Editor\Editor\bin\x86\Debug\Editor.dll 3> 3>Build succeeded. 3> 3>Time Elapsed 00:00:00.39 4>Build started 11/29/2012 3:24:25 AM. 4>CoreCompile: 4> Rebuilding all content because build settings have changed 4> Building Textures\circle.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb 4> Importing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb 4> Building Textures\giroux.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb 4> Importing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb 4> Building Textures\Body_Neutral.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb 4> Importing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb 4> Building font.spritefont -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb 4> Importing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.FontDescriptionImporter 4> Processing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.Processors.FontDescriptionProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb 4>ResolveAssemblyReferences: 4> A TargetFramework profile exclusion list will be generated. 
4>EmbedXnaFrameworkRuntimeProfile: 4>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 4>GenerateTargetFrameworkMonikerAttribute: 4>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 4>CoreCompile: 4>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 4>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 4> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 4>_CopyOutOfDateNestedContentItemsToOutputDirectory: 4>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files. 4>_CopyAppConfigFile: 4>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files. 4>CopyFilesToOutputDirectory: 4> Engine.Game -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Engine.Game.exe 4>IncrementalClean: 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\circle.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\giroux.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Body_Neutral.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\font.xnb". 4> 4>Build succeeded. 4> 4>Time Elapsed 00:00:01.72 ========== Build: 4 succeeded, 0 failed, 1 up-to-date, 0 skipped ========== I can't think of how build settings could change between consecutive executions. Like I said, this only happens for half our group. One member is on a 32-bit Windows 7 Prof bootcamp partition on a Mac. Everyone else, including those who don't have the issue, are running straight 64-bit Windows 7 Prof. Both have tried using VS 2010 and VS 2012. Any insight would be greatly appreciated. Also, I can post more details upon request if this isn't thorough enough.

    Read the article

  • Cannot install icaclient due to problem with ia32-libs

    - by Marc
    I have a problem installing the icaclient package on 13.10 Saucy Salamander 64-bit. It seems that there is a problem with ia32-libs and other dependencies. marc@PinballWizard:~$ sudo dpkg -i Downloads/icaclient_12.1.0_amd64.deb [sudo] password for marc: Selecting previously unselected package icaclient. (Reading database ... 179461 files and directories currently installed.) Unpacking icaclient (from .../icaclient_12.1.0_amd64.deb) ... dpkg: dependency problems prevent configuration of icaclient: icaclient depends on ia32-libs; however: Package ia32-libs is not installed. icaclient depends on lib32z1; however: Package lib32z1 is not installed. icaclient depends on lib32asound2; however: Package lib32asound2 is not installed. dpkg: error processing icaclient (--install): dependency problems - leaving unconfigured Errors were encountered while processing: icaclient Hence, the usual workarounds do not seem to work. I followed the instructions here - and for the last two Ubuntu releases this was never a problem. When I try to install ia32-libs I get the following issue: marc@PinballWizard:~$ sudo apt-get install ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done Package ia32-libs is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: lib32z1 lib32ncurses5 lib32bz2-1.0 E: Package 'ia32-libs' has no installation candidate Is there any way to install icaclient? The sources.list is here.

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued to consolidate and develop the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about: a joint effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens query times. I will describe the issue, our solution, and the end result, with some performance numbers that demonstrate our continuing enhancement of the Full-Text Search capability.

    The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. Once parsed, a query is reinterpreted as several queries against the related auxiliary tables, and the results are then merged and consolidated into the final result. So at the end of the query, we'll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had been initially designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank with only the top N results:

        mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve the rankings for almost every row in the table, sort them, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary, given that InnoDB already has the answer. In a real-life case, the user could have millions of rows, so in the old scheme MySQL would retrieve and sort millions of rows' rankings even if our FTS had already found that there are only 3 matched rows. The million-ranking retrieval is done in vain: it should just ask for the 3 matched rows' rankings (all other rows' rankings are 0), and if it wants the top ranking it can just take the first record from our already-sorted result.

    Case 2: SELECT COUNT(*) on matching records:

        mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and a count was computed afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records, and the only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big it can be:

        mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
        +----------+
        | count(*) |
        +----------+
        |   666877 |
        +----------+
        1 row in set (16 min 17.37 sec)

    So the query took almost 16 minutes. Let's see how quickly InnoDB can come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:

        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 2 secs: row(s) 666877: error: 10
        ft_init()
        ft_init_ext()
        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 3 secs: row(s) 666877: error: 10

    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on unneeded row fetching.

    The Solution: The solution is obvious: MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. The savings from doing this include: 1) Avoiding redundant sorting - since InnoDB has already sorted the result by ranking, the MySQL query processing layer does not need to sort again to get the top matching results. 2) Avoiding row-by-row fetching to get the matching count - InnoDB provides all the matching records; anything not in the result list has a ranking of 0 and need not be retrieved, and InnoDB has a count of the total matching records on hand, so there is no need to recount. 3) Covered index scans - InnoDB results always contain the matching records' Doc IDs and rankings, so if only the Doc ID and ranking are needed there is no need to go to the user table to fetch the record itself. 4) Narrowing the search result early to reduce user-table access - if the user wants the top N matching records, we do not need to fetch all matching records from the user table; we can first select the top N matching Doc IDs and then fetch only the corresponding records.

    Performance results and comparison with MyISAM: The result of this change is very obvious. I include six test results, performed by Alexander Rubin, to demonstrate how fast the InnoDB query now is compared with MyISAM Full-Text Search. These tests are based on English Wikipedia data of 5.4 million rows (a table of approximately 16 GB) and were performed on a machine with one dual-core CPU, an SSD drive, 8 GB of RAM, and the InnoDB buffer pool set to 8 GB.

    Table 1: SELECT with LIMIT clause

        mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;

                             InnoDB      MyISAM             Times Faster
        Time for the query   1.63 sec    3 min 26.31 sec    127

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

        mysql> select count(*) from si where match(si_title, si_text) against('family');
        +----------+
        | count(*) |
        +----------+
        |   293955 |
        +----------+

                             InnoDB      MyISAM             Times Faster
        Time for the query   1.35 sec    28 min 59.59 sec   1289

    In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour - about 1289 times faster!

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                         InnoDB     MyISAM     Times Faster
        family                                       0.5 sec    5.05 sec   10.1
        family film                                  0.95 sec   25.39 sec  26.7
        Pizza restaurant orange county california    0.93 sec   32.03 sec  34.4
        President united states of America           2.5 sec    36.98 sec  14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                         InnoDB     MyISAM     Times Faster
        family                                       0.61 sec   41.65 sec  68.3
        family film                                  1.15 sec   47.17 sec  41.0
        Pizza restaurant orange county california    1.03 sec   48.2 sec   46.8
        President united states of america           2.49 sec   44.61 sec  17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                         InnoDB     MyISAM     Times Faster
        family                                       0.5 sec    5.05 sec   10.1
        family film                                  0.95 sec   25.39 sec  26.7
        Pizza restaurant orange county california    0.93 sec   32.03 sec  34.4
        President united states of america           2.5 sec    36.98 sec  14.8

    Table 6: SELECT COUNT(*)

        mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

        Term                                         InnoDB     MyISAM     Times Faster
        family                                       0.47 sec   82 sec     174.5
        family film                                  0.83 sec   131 sec    157.8
        Pizza restaurant orange county california    0.74 sec   106 sec    143.2
        President united states of america           1.96 sec   220 sec    112.2

    Again, tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It is obvious that InnoDB has a great advantage over MyISAM in handling large-data search.

    Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result information has not been fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and deliver more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • Efficient solution for multiplayer space partitioning?

    - by DevilWithin
    This question is a little tricky, but I will try to make it clear. Let's say I am building an online game - not at MMO scale, but gladly supporting as many players as possible - with an authoritative server approach, and I want really big worlds with lots of AI-simulated enemies. I am aware of a few strategies to save the server's CPU by subdividing the space and not processing what doesn't need processing. I've already split the world into regions, which will require loading times and small transitions; I think that is important to maintain the quality of gameplay when playing locally (alone or even with a couple of friends), because the players won't normally be in more than one or two regions. But even a region can become pretty big and have a lot of NPCs simulating at a time - how do I handle this without hurting the player's experience? Approaches like one server per region and the like are not on the table. I am mainly looking for data structures to hold hordes of enemies, and even peaceful NPCs (see the sketch below). To finalize the question, please note that vehicles exist, so it is considerably fast to travel within a region, which influences "when" to cull areas. Sorry for the confusing question - thanks!
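    For the data-structure part, a uniform grid per region is a common starting point: bucket entities by cell, then fully simulate only the cells within some radius of any player, ticking the rest cheaply or not at all. A minimal sketch (names and tuning values are made up; cell size and radius would need to account for vehicle speed):

        from collections import defaultdict

        CELL = 64.0  # world units per cell; tune to typical interaction range

        def cell_of(x: float, y: float) -> tuple:
            return (int(x // CELL), int(y // CELL))

        class RegionGrid:
            def __init__(self):
                self.cells = defaultdict(set)  # (cx, cy) -> set of entity ids
                self.where = {}                # entity id -> its current cell

            def move(self, eid, x, y):
                c = cell_of(x, y)
                old = self.where.get(eid)
                if old != c:
                    if old is not None:
                        self.cells[old].discard(eid)
                    self.cells[c].add(eid)
                    self.where[eid] = c

            def active_cells(self, player_positions, radius=2):
                """Cells close enough to a player to deserve full AI simulation."""
                active = set()
                for px, py in player_positions:
                    cx, cy = cell_of(px, py)
                    for dx in range(-radius, radius + 1):
                        for dy in range(-radius, radius + 1):
                            active.add((cx + dx, cy + dy))
                return active

        # Each server tick: full AI for NPCs in active cells, coarse updates elsewhere
        grid = RegionGrid()
        grid.move("npc-1", 10.0, 20.0)
        print(grid.active_cells([(0.0, 0.0)]))  # includes npc-1's cell (0, 0)

    Because vehicles make players fast, you can widen the radius (or predict it from velocity) so cells "wake up" before a speeding player arrives.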

    Read the article

  • Cannot reinstall MySQL in 11.10 - ERROR: There's not enough space in /var/lib/mysql/

    - by Robin McCain
    I've tried it all (removing all the packages associated with MySQL) but keep getting stuff like this: Preconfiguring packages ... (Reading database ... 142196 files and directories currently installed.) Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.63-0ubuntu0.11.10.1_amd64.deb) ... ERROR: There's not enough space in /var/lib/mysql/ dpkg: error processing /var/cache/apt/archives/mysql-server-5.1_5.1.63-0ubuntu0.11.10.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.1_5.1.63-0ubuntu0.11.10.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Here is my drive space map. root@kyle:/# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/kyle-root 59361428 59021768 0 100% / udev 1014052 8 1014044 1% /dev tmpfs 409304 1476 407828 1% /run none 5120 0 5120 0% /run/lock none 1023256 0 1023256 0% /run/shm /dev/sda1 233191 46888 173862 22% /boot /dev/md0 1922858288 1048513192 776669500 58% /media/array The root volume actually only has about 10 gigabytes in use on the hard drive (which has a 60 gig partition). /dev/md0 is a 2 TB raid array.

    Read the article

  • Dependencies lib32asound2 [duplicate]

    - by The Mini John
    This question already has an answer here: 'teamviewer depends on (…)' while trying to install TeamViewer (4 answers) I was trying to install TeamViewer, but I was getting a dependency error. I tried to install the dependencies, but with no luck. I think mods are not reading questions through when they mark them as duplicates. I'm getting this error: Unpacking teamviewer (from teamviewer_linux_x64.deb) ... dpkg: dependency problems prevent configuration of teamviewer: teamviewer depends on lib32asound2; however: Package lib32asound2 is not installed. teamviewer depends on lib32z1; however: Package lib32z1 is not installed. teamviewer depends on ia32-libs; however: Package ia32-libs is not installed. dpkg: error processing teamviewer (--install): dependency problems - leaving unconfigured Errors were encountered while processing: teamviewer I tried sudo apt-get -f install but I get: Package ia32-libs is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: lib32z1 lib32ncurses5 lib32bz2-1.0 Package lib32asound2 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'lib32asound2' has no installation candidate E: Package 'ia32-libs' has no installation candidate I can't even get to the sudo dpkg -i teamviewer_linux_x64.deb step. If I force installation with sudo dpkg --force-depends -i teamviewer_linux_x64.deb, although it says "Setting up teamviewer", it gives me this. I'm fairly new to Ubuntu - can anyone help me out? I'm on Ubuntu 13.10

    Read the article

  • XNA extending the existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data and therefore we need a custom type containing the Song and some additional data:

        // Project AudioGameLibrary
        namespace AudioGameLibrary
        {
            public class GameTrack
            {
                public Song Song;
                public string Extra;
            }
        }

    We've added a Content Pipeline extension:

        // Project GameTrackProcessor
        namespace GameTrackProcessor
        {
            [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")]
            public class GameTrackContent
            {
                public SongContent SongContent;
                public string Extra;
            }

            [ContentProcessor(DisplayName = "GameTrack Processor")]
            public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent>
            {
                public GameTrackProcessor() {}

                public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
                {
                    return new GameTrackContent()
                    {
                        SongContent = new SongProcessor().Process(input, context),
                        Extra = "Some extra data" // Here we can do our processing on 'input'
                    };
                }
            }
        }

    Both the Library and the Pipeline extension are added to the Game Solution and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems however:

        // Project AudioGame
        protected override void LoadContent()
        {
            AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack");
            MediaPlayer.Play(gameTrack.Song);
        }

    The error message: Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song but trying to load as AudioGameLibrary.GameTrack. AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references?

    Read the article

  • Flash removal and installation issue

    - by Theo
    I'm having this issue trying to uninstall and/or upgrade the Adobe flash player plug-in. Here's what I've ran through the terminal: $ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.0.0-13-generic libgladeui-1-11 linux-headers-3.0.0-19-generic linux-headers-3.0.0-13 linux-headers-3.0.0-19 erlang-base Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: adobe-flashplugin 0 upgraded, 0 newly installed, 1 to remove and 2 not upgraded. 1 not fully installed or removed. After this operation, 10.2 MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 375840 files and directories currently installed.) Removing adobe-flashplugin ... update-alternatives: error: no alternatives for iceape-flashplugin. update-alternatives: error: no alternatives for iceape-flashplugin. dpkg: error processing adobe-flashplugin (--remove): subprocess installed pre-removal script returned error exit status 2 No apport report written because MaxReports is reached already postinst called with argument `abort-remove' dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: adobe-flashplugin E: Sub-process /usr/bin/dpkg returned an error code (1) Please advise if you can. Let me know if there is any other info you need.

    Read the article

  • IDC and Beecham Research: New analyst reports and webcast

    - by terrencebarr
    Embedded Java is getting a lot of attention in the analyst community these days. Check out these new analyst reports and a webcast by IDC as well as Beecham Research. IDC published a White Paper titled “Ghost in the Machine: Java for Embedded Development”, and an accompanying webcast recording. Highlights of the White Paper: The embedded systems industry is projected to continue to expand rapidly, reaching $2.1 trillion in 2015 The market for intelligent systems, where Java’s rich set of services are most needed, is projected to grow to 78% of all embedded systems in 2015  Java is widely used in embedded systems and is expected to continue to gain traction in areas where devices present an application platform for developers The free IDC webcast and White Paper can be accessed here. Beecham Research published a report titled “Designing an M2M Platform for the Connected World”. Highlights of the report: The total revenue for M2M Services is projected to double, from almost $15 billion in 2012 to over $30 billion in 2016 The primary driver for M2M solutions is now enabling new services Important trends that are developing are: Enterprise integration – more data and using the data more strategically, new markets in the Internet of Things (IoT), processing large amounts of data in real time (complex event processing) Using the same software development environment for all parts of an M2M solution is a major advantage if the software can be optimized for each part of the solution The free Beecham Research report can be accessed here. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: iot, Java Embedded, M2M, research, webcast

    Read the article

  • Cannot install extensions required for GNOME Shell themes

    - by Soham Chowdhury
    I keep getting this output: soham@fortress:~$ sudo apt-get install gnome-shell-extensions gnome-tweak-tool Reading package lists... Done Building dependency tree Reading state information... Done gnome-tweak-tool is already the newest version. The following NEW packages will be installed: gnome-shell-extensions 0 upgraded, 1 newly installed, 0 to remove and 43 not upgraded. 1 not fully installed or removed. Need to get 0 B/121 kB of archives. After this operation, 849 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 179291 files and directories currently installed.) Unpacking gnome-shell-extensions (from .../gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb) ... dpkg: error processing /var/cache/apt/archives/gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb (--unpack): trying to overwrite '/usr/share/locale/lv/LC_MESSAGES/gnome-shell-extensions.mo', which is also in package gnome-shell-extensions-common 3.2.0-0ubuntu1~oneiric1 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Update: Fixed that. Now GNOME Tweak Tool shows me an exclamation mark beside the extension enable button, saying "Extension doesn't support shell version". My GNOME shell is already the latest version. Help!

    Read the article

  • jQuery + Perl CGI to vb.net transition

    - by user1257458
    I've been developing Oracle database-heavy "web applications" forever by building my HTML by hand, adding some jQuery to handle AJAX requests (HTML inserts for forms processing, etc.), and I have always done my server-side stuff in Perl CGI. I really love how easy it is to read some form input, execute some select statements through DBI (SO EASY), and generate HTML to be inserted by the jQuery request. That's a web application to me. However, my new boss builds everything in Visual Studio 2010, VB.NET, usually WebForms. So, for work reasons, I now need to start developing in VB.NET so it can be collectively maintained, and I'm just seeking advice on where to start learning/how to approach this. I know I could at least learn ASP.NET and VB.NET, and create a WebForm, have it read parameters, return HTML, etc., which would allow me to use my previously written HTML and client-side scripts (jQuery). Although, since we're moving heavily to mobile applications, I really need to reduce the client-side processing load. Is there any advantage to my boss' method? Thanks a ton.

    Read the article

  • Fix Package is in a very bad inconsistent state

    - by Benjamin Piller
    I can't update my system because it freezes while installing a third-party update (zramswap-enabler)! Sometimes I get the following message in Update Manager: Could not initialize the package information An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: E:The package zramswap-enabler needs to be reinstalled, but I can't find an archive for it. I tried to remove zramswap-enabler, but it's impossible because I get the following message: dpkg: error processing zramswap-enabler (--remove): Package is in a very bad inconsistent state - you should reinstall it before attempting a removal. Errors were encountered while processing: zramswap-enabler E: Sub-process /usr/bin/dpkg returned an error code (1) I would really like to reinstall that package, but I am unable to! If I remove the third-party PPA, the system warns me about a very serious problem. So why can I not install/reinstall/remove/update this package, and why does the updater freeze when I try to update? -S O L U T I O N-: Okay! I finally found the solution to this problem! Step 1. Make sure that your PPA is set up correctly. Step 2. Remove the broken package via the following command: sudo dpkg --remove --force-remove-reinstreq zramswap-enabler Step 3. Install the package again (sudo apt-get install zramswap-enabler) Step 4. After a restart (not strictly necessary) you are able to install the updates correctly! You can actually fix any "Package is in a very bad inconsistent state" issue with this solution!

    Read the article

  • fern-wifi-cracker "Exec format error" breaks packaging system

    - by cunix
    root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: fern-wifi-cracker 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 3,514kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 167661 files and directories currently installed.) Removing fern-wifi-cracker ... dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error dpkg: error processing fern-wifi-cracker (--remove): subprocess installed pre-removal script returned error exit status 2 Errors were encountered while processing: fern-wifi-cracker E: Sub-process /usr/bin/dpkg returned an error code (1) how to uninstall?

    Read the article

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil)  If your OBIEE 11.1.1.7.x is set up in the following way:
    - The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    - Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
    - The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
    - The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)
    then we've got news for you: KM Article: OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboards. This diagnostic approach:
    - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system, including the repository.
    - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and the Oracle DB.
    - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.
    (layout and post: Torben, authorized: Lia)
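    As a quick first pass before working through the KM article, you can query the usage tracking data directly. The sketch below is not from the article; it assumes usage tracking writes to the standard S_NQ_ACCT table (the OBIEE 11g default) and that the repository database is reachable via sqlplus - the connect string and schema owner are placeholders:

      #!/bin/sh
      # List the slowest dashboards of the last 7 days from OBIEE usage tracking.
      QUERY="SELECT saw_dashboard,
                    COUNT(*)                      AS executions,
                    ROUND(AVG(total_time_sec), 1) AS avg_seconds
             FROM   s_nq_acct
             WHERE  start_ts > SYSDATE - 7
             GROUP  BY saw_dashboard
             ORDER  BY avg_seconds DESC;"
      echo "$QUERY" | sqlplus -s usage_owner/password@dw_db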

    Read the article

  • Trace Mobile Service Serving 20,000+ Requests Per Month

    - by Gopinath
    We introduced the Trace Mobile Service in April 2010, and we are glad to announce that the service now processes 20,000+ requests per month. After a long time I looked at the statistics today and was delighted to see the number of trace requests processed by the service: 24,282, 23,781 and 18,475 in January 2011, December 2010 and November 2010 respectively. I'm also glad to announce that this service contributes close to 10% of our revenue. A table in the original article provides stats for the past 7 months. For those who don't know about this service: it is a tiny, yet very useful, service for tracing information about Indian mobile phones. Usage is very simple: enter any Indian mobile phone number and it will instantly tell you the location and the service provider of that phone. Visit Trace Mobile Service or read Introducing "Trace Mobile Information" Service for more details. This article, titled Trace Mobile Service Serving 20,000+ Requests Per Month, was originally published at Tech Dreams.

    Read the article

  • Unable to install Dockmanager

    - by Mark Rooney
    I have Docky installed on Ubuntu 10.10 64-bit and noticed after a recent upgrade that my 'Helpers' are no longer available. After some research I found that Dockmanager is no longer installed either. I am unable to install it via the Software Centre or via the terminal using apt-get; the following error is returned:

    mark@Sonata:~$ sudo apt-get install dockmanager
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      dockmanager
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0B/94.4kB of archives.
    After this operation, 430kB of additional disk space will be used.
    (Reading database ... 162015 files and directories currently installed.)
    Unpacking dockmanager (from .../dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb) ...
    dpkg: error processing /var/cache/apt/archives/dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb (--unpack):
      trying to overwrite '/usr/share/dockmanager/data/skype_invisible.svg', which is also in package faenza-icon-theme 0.8
    dpkg-deb: subprocess paste killed by signal (Broken pipe)
    Errors were encountered while processing:
      /var/cache/apt/archives/dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    mark@Sonata:~$

    Can anyone advise on how to fix this?
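    The failure is a file conflict: the dockmanager package ships a file that faenza-icon-theme already owns. The commands below are not from the original thread; a common workaround is to let dpkg overwrite the conflicting file once and then have apt finish the job (a sketch; the .deb path is taken from the error output above):

      # Overwrite the conflicting icon file owned by faenza-icon-theme.
      sudo dpkg -i --force-overwrite /var/cache/apt/archives/dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb
      # Let apt repair anything left half-configured.
      sudo apt-get install -f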

    Read the article

  • Error with APE Server Installation

    - by sadmicrowave
    I was trying to install APE Server from the .deb file at the APE project homepage (www.ape-project.org) and ran into an error, so I wanted to remove the installation and reinstall. I ran sudo apt-get remove ape-server, which completed successfully but left ape-server folders in /etc/ and /etc/init.d/. Being a newcomer to Linux, I made the mistake of deleting those folders manually. Now when I reinstall ape-server those folders are not recreated, and therefore I cannot run /etc/init.d/ape-server [option], because the file is not found. When I try to sudo apt-get purge (or remove) ape-server, I get the following:

    sudo apt-get purge ape-server
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be REMOVED:
      ape-server*
    0 upgraded, 0 newly installed, 1 to remove and 92 not upgraded.
    1 not fully installed or removed.
    After this operation, 1,753kB disk space will be freed.
    Do you want to continue [Y/n]? y
    (Reading database ... 43924 files and directories currently installed.)
    Removing ape-server ...
    invoke-rc.d: unknown initscript, /etc/init.d/ape-server not found.
    dpkg: error processing ape-server (--purge): subprocess installed pre-removal script returned error exit status 100
    update-rc.d: /etc/init.d/ape-server: file does not exist
    dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
      ape-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    My question is: how do I remove all of the ape-server packages that were installed, so I can reinstall from scratch?
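    The purge fails because the package's maintainer scripts expect the init script that was deleted by hand. The following is not from the original thread, but a common recovery is to put a stub init script back so those scripts can run, then purge cleanly (a sketch under that assumption):

      # Recreate a harmless stand-in for the deleted init script.
      sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /etc/init.d/ape-server'
      sudo chmod 755 /etc/init.d/ape-server
      # invoke-rc.d/update-rc.d can now find the script, so the purge succeeds.
      sudo apt-get purge ape-server
      # Reinstall from scratch afterwards if desired.
      sudo apt-get install ape-server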

    Read the article

  • Programmer + Drugs =? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Now I'm wondering: Has there ever been a study in which programmers were given drugs to see if they could produce "better" code? Is there a programming concept that originated from drug users? Do you know of a piece of code that was written by someone under the influence? EDIT: I did a little more research, and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1] There is also an interesting article in Wired about Kevin Herbert, who used LSD to solve tough technical problems, and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article
