Search Results

Search found 7216 results on 289 pages for 'low cost'.

Page 54 of 289

  • Access Expression problem: it's too complex, so how do I turn it into a function?

    - by Mike
    Access 2007 is telling me that my new expression is too complex. It used to work when we had 10 service levels, but now we have 19! Great! I've asked this question on SuperUser and someone suggested I try it over here. The suggestion is that I turn it into a function, but I'm not sure where to begin or what the function would look like. My expression checks the COST of our services in the [PriceCharged] field and then assigns the appropriate HOURS [ServiceLevel] when I perform a calculation to work out how much REVENUE each colleague has made when working for a client. The [EstimatedTime] field stores the actual hours each colleague has worked. [EstimatedTime]/[ServiceLevel]*[PriceCharged] Below is the breakdown of my COST to HOURS expression. I've put them on different lines to make it easier to read - please do not be put off by the length of this post, it's all the same info in the end. Many thanks, Mike

    ServiceLevel: IIf([pricecharged]=100(COST),6(HOURS),
    IIf([pricecharged]=200 Or [pricecharged]=210,12.5,
    IIf([pricecharged]=300,19,
    IIf([pricecharged]=400 Or [pricecharged]=410,25,
    IIf([pricecharged]=500,31,
    IIf([pricecharged]=600,37.5,
    IIf([pricecharged]=700,43,
    IIf([pricecharged]=800 Or [pricecharged]=810,50,
    IIf([pricecharged]=900,56,
    IIf([pricecharged]=1000,62.5,
    IIf([pricecharged]=1100,69,
    IIf([pricecharged]=1200 Or [pricecharged]=1210,75,
    IIf([pricecharged]=1300 Or [pricecharged]=1310,100,
    IIf([pricecharged]=1400,125,
    IIf([pricecharged]=1500,150,
    IIf([pricecharged]=1600,175,
    IIf([pricecharged]=1700,200,
    IIf([pricecharged]=1800,225,
    IIf([pricecharged]=1900,250,0)))))))))))))))))))
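
    One way to read the expression above is as a lookup table from price to hours. Below is a sketch of that idea in Python, purely to show the shape of it (the price/hour pairs are copied from the IIf chain); in Access the same mapping would live in a small VBA function using Select Case, or better still in a table of service levels joined into the query.

        SERVICE_LEVELS = {
            100: 6, 200: 12.5, 210: 12.5, 300: 19, 400: 25, 410: 25, 500: 31,
            600: 37.5, 700: 43, 800: 50, 810: 50, 900: 56, 1000: 62.5, 1100: 69,
            1200: 75, 1210: 75, 1300: 100, 1310: 100, 1400: 125, 1500: 150,
            1600: 175, 1700: 200, 1800: 225, 1900: 250,
        }

        def service_level(price_charged):
            # 0 mirrors the final fallback of the IIf chain
            return SERVICE_LEVELS.get(price_charged, 0)

        def revenue(estimated_time, price_charged):
            # [EstimatedTime]/[ServiceLevel]*[PriceCharged]; a 0 service level
            # (unknown price) would need guarding before dividing
            return estimated_time / service_level(price_charged) * price_charged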

    Read the article

  • How to hide zero values in bar3 plot in MATLAB

    - by Doresoom
    I've got a 2-D histogram (the plot is 3D - several histograms graphed side by side) that I've generated with the bar3 plot command. However, all the zero values show up as flat squares in the x-y plane. Is there a way I can prevent MATLAB from displaying the values? I already tried replacing all zeros with NaNs, but it didn't change anything about the plot. Here's the code I've been experimenting with:

        x1=normrnd(50,15,100,1); %generate random data to test code
        x2=normrnd(40,13,100,1);
        x3=normrnd(65,12,100,1);
        low=min([x1;x2;x3]);
        high=max([x1;x2;x3]);
        y=linspace(low,high,(high-low)/4); %establish consistent bins for histogram
        z1=hist(x1,y);
        z2=hist(x2,y);
        z3=hist(x3,y);
        z=[z1;z2;z3]';
        bar3(z)

    As you can see, there are quite a few zero values on the plot. Closing the figure and re-plotting after replacing zeros with NaNs seems to change nothing:

        close
        z(z==0)=NaN;
        bar3(z)
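
    The underlying idea is simply not to draw bars whose height is zero. A rough equivalent sketched in Python/matplotlib is below (the bin count is approximate, and the 3d-projection import is only needed on older matplotlib versions); in MATLAB itself, one commonly suggested route is to take the handles returned by bar3 and NaN out the ZData of the zero-height bars after plotting, since zeroing the input array alone does not hide them.

        import numpy as np
        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

        rng = np.random.default_rng(0)
        data = [rng.normal(50, 15, 100), rng.normal(40, 13, 100), rng.normal(65, 12, 100)]
        lo, hi = min(d.min() for d in data), max(d.max() for d in data)
        bins = np.linspace(lo, hi, 20)          # rough stand-in for the linspace above
        z = np.array([np.histogram(d, bins)[0] for d in data]).T

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        rows, cols = np.nonzero(z)              # keep only the non-zero bars
        ax.bar3d(cols, rows, np.zeros(len(rows)), 0.8, 0.8, z[rows, cols])
        plt.show()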

    Read the article

  • Perl cron job stays running

    - by Dylan
    I'm currently using a cron job to have a Perl script that tells my Arduino to cycle my aquaponics system and all is well, except the Perl script doesn't die as intended. Here is my cron job: */15 * * * * /home/dburke/scripts/hal/bin/main.pl cycle And below is my Perl script: #!/usr/bin/perl -w # Sample Perl script to transmit number # to Arduino then listen for the Arduino # to echo it back use strict; use Device::SerialPort; use Switch; use Time::HiRes qw ( alarm ); $|++; # Set up the serial port # 19200, 81N on the USB ftdi driver my $device = '/dev/arduino0'; # Tomoc has to use a different tty for testing #$device = '/dev/ttyS0'; my $port = new Device::SerialPort ($device) or die('Unable to open connection to device');; $port->databits(8); $port->baudrate(19200); $port->parity("none"); $port->stopbits(1); my $lastChoice = ' '; my $pid = fork(); my $signalOut; my $args = shift(@ARGV); # Parent must wait for child to exit before exiting itself on CTRL+C $SIG{'INT'} = sub { waitpid($pid,0) if $pid != 0; exit(0); }; # What child process should do if($pid == 0) { # Poll to see if any data is coming in print "\nListening...\n\n"; while (1) { my $incmsg = $port->lookfor(9); # If we get data, then print it if ($incmsg) { print "\nFrom arduino: " . $incmsg . "\n\n"; } } } # What parent process should do else { if ($args eq "cycle") { my $stop = 0; sleep(1); $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; $stop = 1; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; alarm (420); while ($stop == 0) { sleep(2); } die "Done."; } else { sleep(1); my $choice = ' '; print "Please pick an option you'd like to use:\n"; while(1) { print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: "; chomp($choice = <STDIN>); switch ($choice) { case /1/ { $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; alarm (420); $lastChoice = $choice; } case /2/ { $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2"; $lastChoice = $choice; } case /3/ { $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1"; $lastChoice = $choice; } case /4/ { print "There is no configuration available yet. Please stab the developer."; } else { print "Please select a valid option.\n\n"; } } } } } Why wouldn't it die from the statement die "Done.";? It runs fine from the command line and also interprets the 'cycle' argument fine. When it runs in cron it runs fine, however, the process never dies and while each process doesn't continue to cycle the system it does seem to be looping in some way due to the fact that it ups my system load very quickly. If you'd like more information, just ask. 
EDIT: I have changed to code to: #!/usr/bin/perl -w # Sample Perl script to transmit number # to Arduino then listen for the Arduino # to echo it back use strict; use Device::SerialPort; use Switch; use Time::HiRes qw ( alarm ); $|++; # Set up the serial port # 19200, 81N on the USB ftdi driver my $device = '/dev/arduino0'; # Tomoc has to use a different tty for testing #$device = '/dev/ttyS0'; my $port = new Device::SerialPort ($device) or die('Unable to open connection to device');; $port->databits(8); $port->baudrate(19200); $port->parity("none"); $port->stopbits(1); my $lastChoice = ' '; my $signalOut; my $args = shift(@ARGV); # Parent must wait for child to exit before exiting itself on CTRL+C if ($args eq "cycle") { open (LOG, '>>log.txt'); print LOG "Cycle started.\n"; my $stop = 0; sleep(2); $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; $stop = 1; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; print LOG "Alarm is being set.\n"; alarm (420); print LOG "Alarm is set.\n"; while ($stop == 0) { print LOG "In while-sleep loop.\n"; sleep(2); } print LOG "The loop has been escaped.\n"; die "Done."; print LOG "No one should ever see this."; } else { my $pid = fork(); $SIG{'INT'} = sub { waitpid($pid,0) if $pid != 0; exit(0); }; # What child process should do if($pid == 0) { # Poll to see if any data is coming in print "\nListening...\n\n"; while (1) { my $incmsg = $port->lookfor(9); # If we get data, then print it if ($incmsg) { print "\nFrom arduino: " . $incmsg . "\n\n"; } } } # What parent process should do else { sleep(1); my $choice = ' '; print "Please pick an option you'd like to use:\n"; while(1) { print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: "; chomp($choice = <STDIN>); switch ($choice) { case /1/ { $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; alarm (420); $lastChoice = $choice; } case /2/ { $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2"; $lastChoice = $choice; } case /3/ { $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1"; $lastChoice = $choice; } case /4/ { print "There is no configuration available yet. Please stab the developer."; } else { print "Please select a valid option.\n\n"; } } } } }
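
    A guess at what is going on, offered as a sketch rather than a certain diagnosis: in the original version the script forks unconditionally, so the cycle run in the parent does reach die "Done.", but the forked listener child is left behind, and its while (1) { $port->lookfor(9); } loop has no sleep, so the orphan spins a core flat out. That would explain both the process that never goes away and the climbing load. The shape of the fix, shown in Python only for brevity, is that whichever branch finishes first must take the listener down with it:

        import os, signal, sys, time

        def listen():
            while True:          # stand-in for the lookfor(9) poll loop
                time.sleep(0.5)  # without some sleep this loop pegs the CPU

        pid = os.fork()
        if pid == 0:
            listen()             # child: never returns on its own
        else:
            time.sleep(5)                   # stand-in for the 420-second fill cycle
            os.kill(pid, signal.SIGTERM)    # exiting the parent alone leaves an orphan
            os.waitpid(pid, 0)
            sys.exit(0)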

    Read the article

  • Demangling typeclass functions in GHC profiler output

    - by Paul Kuliniewicz
    When profiling a Haskell program written in GHC, the names of typeclass functions are mangled in the .prof file to distinguish one instance's implementations of them from another. How can I demangle these names to find out which type's instance it is? For example, suppose I have the following program, where types Fast and Slow both implement Show:

        import Data.List (foldl')

        sum' = foldl' (+) 0

        data Fast = Fast
        instance Show Fast where show _ = show $ sum' [1 .. 10]

        data Slow = Slow
        instance Show Slow where show _ = show $ sum' [1 .. 100000000]

        main = putStrLn (show Fast ++ show Slow)

    I compile with -prof -auto-all -caf-all and run with +RTS -p. In the .prof file that gets generated, I see that the top cost centers are:

        COST CENTRE  MODULE  %time  %alloc
        show_an9     Main     71.0    83.3
        sum'         Main     29.0    16.7

    And in the tree, I likewise see (omitting irrelevant lines):

                                               individual       inherited
        COST CENTRE  MODULE  no.  entries  %time  %alloc    %time  %alloc
        main         Main    232        1    0.0     0.0    100.0   100.0
         show_an9    Main    235        1   71.0    83.3    100.0   100.0
          sum'       Main    236        0   29.0    16.7     29.0    16.7
         show_anx    Main    233        1    0.0     0.0      0.0     0.0

    How do I figure out that show_an9 is Slow's implementation of show and not Fast's?

    Read the article

  • Force Windows video driver reload. Is it possible at all?

    - by somemorebytes
    Hi there, Some drivers use parameters written in the registry to configure themselves when they get loaded at boot time. I can modify those values and then reboot, but I would like to know if it is possible to force the driver to reload, making the changes effective without rebooting. Specifically, I am talking about the video driver (nvidia). I read somewhere that calling ChangeDisplaySettings() from User32.dll through P/Invoke with a 640x480x8bit resolution (which is so low that it should not be supported by a modern driver) will force Windows to load the "Standard VGA driver", and that making another call with the current resolution will load the nvidia driver again. This does not work, though. At least in Windows 7, even if the low resolution is not listed as supported, the system just reduces the screen to a little square in the center of the display, showing the low resolution without unloading the nvidia driver. So, is there any .NET/Win32 API, service to restart, or any way at all to force a video driver reload? Perhaps programmatically disabling the device (as you could do from the Device Manager) and re-enabling it again? Any idea? Thanks a lot.
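
    The disable/re-enable idea from the last paragraph can be scripted against Microsoft's devcon.exe, the command-line counterpart of Device Manager. A sketch follows (Python is only a thin wrapper here; the hardware ID is a placeholder you would look up first with devcon find =display, and whether the nvidia driver actually re-reads its registry parameters on a device restart is exactly the open question):

        import subprocess

        HWID = r"PCI\VEN_10DE*"   # placeholder: match your display adapter's hardware ID

        def restart_display_adapter():
            # "devcon restart" stops and restarts the device, which forces the
            # driver to be unloaded and reloaded without a full reboot
            subprocess.run(["devcon", "restart", HWID], check=True)

        restart_display_adapter()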

    Read the article

  • Export sheet from Excel to CSV

    - by Mike Wills
    I am creating a spreadsheet to help ease the entry of data into one of our systems. They are entering inventory items into this spreadsheet to calculate the unit cost of the item (item cost + tax + S&H). The software we purchased cannot do this. An invoice can have one or more lines (duh!) and I calculate the final unit cost. This is working fine. I then want to take that data and create a CSV from it so they can load it into our inventory system. I currently have a second tab that is laid out like I want the CSV, and I use an equals formula (=Sheet!A3) to pull the values onto the "export sheet". The problem is that when they save this to a CSV, there are many blank lines that need to be deleted before they can upload it. I want a file that only contains the data that is needed. I am sure this could be done in VBA, but I don't know where to start or how to search for an example to start from. Any direction or other options would be appreciated.
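
    The asker is after VBA, but the shape of the fix is the same in any language: loop over the rows of the export sheet and write only the rows that actually contain data. A sketch of that idea in Python with openpyxl (the workbook filename and sheet name are made up for illustration):

        import csv
        from openpyxl import load_workbook

        wb = load_workbook("inventory.xlsx", data_only=True)   # data_only reads the formula results
        ws = wb["Export"]                                      # assumed name of the export tab

        with open("upload.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for row in ws.iter_rows(values_only=True):
                # skip rows that are entirely empty
                if any(cell not in (None, "") for cell in row):
                    writer.writerow(row)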

    Read the article

  • STL vectors with uninitialized storage?

    - by Jim Hunziker
    I'm writing an inner loop that needs to place structs in contiguous storage. I don't know how many of these structs there will be ahead of time. My problem is that STL's vector initializes its values to 0, so no matter what I do, I incur the cost of the initialization plus the cost of setting the struct's members to their values. Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements? (I'm certain that this part of the code needs to be optimized, and I'm certain that the initialization is a significant cost.) Also, see my comments below for a clarification about when the initialization occurs. SOME CODE:

        void GetsCalledALot(int* data1, int* data2, int count)
        {
            int mvSize = memberVector.size();
            memberVector.resize(mvSize + count); // causes 0-initialization
            for (int i = 0; i < count; ++i)
            {
                memberVector[mvSize + i].d1 = data1[i];
                memberVector[mvSize + i].d2 = data2[i];
            }
        }

    Read the article

  • parse a special xml in python

    - by zhaojing
    I have a special XML file like below:

        <alarm-dictionary source="DDD" type="ProxyComponent">
          <alarm code="402" severity="Alarm" name="DDM_Alarm_402">
            <message>Database memory usage low threshold crossed</message>
            <description>dnKinds = database
              type = quality_of_service
              perceived_severity = minor
              probable_cause = thresholdCrossed
              additional_text = Database memory usage low threshold crossed
            </description>
          </alarm>
          ...
        </alarm-dictionary>

    I know that in Python I can get the "alarm code" and "severity" attributes of the alarm tag by:

        for alarm_tag in dom.getElementsByTagName('alarm'):
            if alarm_tag.hasAttribute('code'):
                alarmcode = str(alarm_tag.getAttribute('code'))

    And I can get the text in the message tag like below:

        for messages_tag in dom.getElementsByTagName('message'):
            messages = ""
            for message_tag in messages_tag.childNodes:
                if message_tag.nodeType in (message_tag.TEXT_NODE, message_tag.CDATA_SECTION_NODE):
                    messages += message_tag.data

    But I also want to get the values like dnkind (database), type (quality_of_service), perceived_severity (thresholdCrossed) and probable_cause (Database memory usage low threshold crossed) from the description tag. That is, I also want to parse the content inside that tag as well. Could anyone help me with this? Thanks a lot!
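
    Since the description element is just free text, one approach is to read its text the same way as the message text and then pull the name = value pairs out with a regular expression. A sketch, assuming the file is called alarms.xml and that each pair sits on its own line as in the sample:

        import re
        from xml.dom.minidom import parse

        dom = parse("alarms.xml")          # assumed filename

        for alarm_tag in dom.getElementsByTagName('alarm'):
            desc = ""
            for desc_tag in alarm_tag.getElementsByTagName('description'):
                for node in desc_tag.childNodes:
                    if node.nodeType in (node.TEXT_NODE, node.CDATA_SECTION_NODE):
                        desc += node.data
            # turn "name = value" lines into a dict
            fields = dict(re.findall(r'(\w+)\s*=\s*(.+)', desc))
            print(alarm_tag.getAttribute('code'), fields.get('perceived_severity'))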

    Read the article

  • Having to insert a record, then update the same record warrants 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items and we're storing the total cost of an order (based on the sum of prices on order lines) in the orders table.

        --------------
        orders
        --------------
        id
        ref
        total_cost
        --------------

        --------------
        lines
        --------------
        id
        order_id
        price
        --------------

    In a simple application, the order and lines are created during the same step of the checkout process. So this means:

        INSERT INTO orders ....
        -- Get ID of inserted order record
        INSERT INTO lines VALUES(null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:

        1. create an order
        2. create lines on the order
        3. calculate the cost on the order based on its lines
        4. then update the record created in 1. in the orders table

    This would mean a nullable total_cost field on orders for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table. But I think it's redundant. Ideally, since everything required to calculate total costs (lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?
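
    Since everything needed for the total is already in hand at checkout (the line prices), one way out of the insert-then-update dance is to compute the total first and write the order row once, complete, in the same transaction as its lines; then total_cost never needs to be nullable and no 1:1 side table is needed. A sketch of that flow in Python with sqlite3, using the two tables described above:

        import sqlite3

        def create_order(conn, ref, prices):
            # The line prices are known before anything is written, so the
            # total can be computed up front and the order inserted complete.
            total = sum(prices)
            with conn:  # one transaction: order and lines commit (or roll back) together
                cur = conn.execute(
                    "INSERT INTO orders (ref, total_cost) VALUES (?, ?)", (ref, total))
                order_id = cur.lastrowid
                conn.executemany(
                    "INSERT INTO lines (order_id, price) VALUES (?, ?)",
                    [(order_id, p) for p in prices])
            return order_id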

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
    PARTNER FOCUS: Oracle Exadata Marketing Campaign. Steve McNickle, VP Europe, cVidya. Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade. Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations? "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution." What unique capabilities and customer benefits does Oracle Exastack add to your applications? "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads." "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention." How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: “I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud”. The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them." -- Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel. "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands." "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves." "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."

    Read the article

  • Where to find new Micro-BTX (uBTX) motherboards? Or should I just replace the box?

    - by John Rudy
    OK, so I'm guessing that it's dead. It's not my machine, and the owner is on a very fixed (IE, none) income. I'm generous, but I'm not that generous, since I already gave him what (at the time) was a fully functional and fairly well-equipped machine. (Aside from the mobo and proc, almost nothing else in it was stock. I'd taken it up to 3GB of RAM, upgraded the hard drive, added a decent video card, installed a wireless adapter, running Vista, etc.) According to further research, the machine uses a Micro-BTX (uBTX) motherboard, and since it's an AMD Athlon64, the AM2 socket. So I'm looking at a few options, and am wondering what's the best route to take? Find an AM2 socket uBTX mobo. I can't find them new online anywhere, leading me to believe that this is an obsolete form factor/chip combination. I don't want a refurb or a system pull because, quite honestly, once I deal with this mess, I don't want to go through it again in another year or two. Find an Intel uBTX mobo and a (relatively -- hah, I still want at least a dual-core) inexpensive Intel CPU. At this point, the only things stock in the machine would be the case and the PSU. :) Buy a bare-bones kit (mobo/proc/PSU/case, sometimes even RAM) from somewhere like CompUSA/TigerDirect or Fry's and move all of the other hardware over. This makes life difficult because the copy of Vista is an upgrade, tied to the copy of XP which shipped on the Gateway, which is OEM and won't install on the new box. :) If I change the CPU brand (AMD to Intel), will I need to reinstall Windows, or can it just be reactivated? Where can I actually find a new, in-box, not system pull, not refurb AM2 uBTX mobo? Do they even exist anymore? What kind of money are we talking (US dollars)? The end goal is to get the machine functional again as cheaply as humanly possible. If it were my own machine, I wouldn't even be asking this, I'd be custom-building a new one. However, it's not mine, I'm shelling out of pocket for the fix (plus the work), and thus want to keep that end price low-low-low.

    Read the article

  • Recover a lost IBM DS4300 SAN password?

    - by Daniel
    I have a pair of IBM DS4300 SAN units that I need to perform a firmware upgrade on. Unfortunately the admin passwords on these units have been lost and now need to be reset. I had hoped the default password of 'infiniti' would work but it seems that it must have been changed. I know the method here is to contact IBM but the cost of that phone call is ludicrous if someone out here knows what they're going to tell me to do. These units are out of warranty and the cost of support is beyond what we have the budget for. Is there an interrupt I can enter during the boot process (similar to a cisco password recovery) or is there a hardware/software tool I require? Please, if anyone has ever gone through the process of an IBM password recovery I’m asking for a little help. EDIT: Just to clarify, the password I need to reset is the one used for serial cable access and not for storage manager. Sorry if I have caused more confusion.

    Read the article

  • VPN for a small organization

    - by user24091
    I am in charge of a small office network that has < 10 users. I want to be able to offer them access to the network from their home internet connections. At the moment we have a regular ADSL-router-firewall to provide local network access and a fixed IP address. I know there are enterprise-level VPN solutions, but these obviously won't be available to us because of the cost and complexity. What small-scale solutions are around that you could recommend, what would we need to deploy on the client side, and what would the clients need to do to access the VPN? Simplicity and low cost need to be the keys here. Thanks

    Read the article

  • Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM

    - by magneticMonster
    I'm running a Java data import process on a 32-bit Ubuntu 10 PAE kernel machine. After running the process for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm show me Low: 464 386 77 with the free value (77MB) slowly decreasing. Why am I running out of lowmem and how do I increase it? Some details: $ cat /proc/sys/vm/lowmem_reserve_ratio 256 256 32 $ free -lm total used free shared buffers cached Mem: 32086 24611 7475 0 0 24012 Low: 464 407 57 High: 31621 24204 7417 -/+ buffers/cache: 598 31487 Swap: 2047 0 2047

    Read the article

  • in HFT trading should I upgrade from Windows Server 2008 R2 to Windows Server 2012?

    - by javapowered
    I'm using an HP DL360p Gen8 + Windows Server 2008 R2 for HFT trading. That means that every 10 microseconds is important to me. I do understand that if I need everything to be so fast I should probably consider using Linux. But in this post I want to compare only Windows Server 2008 R2 and Windows Server 2012. I've found a couple of articles on the internet that suggest how to tune Windows Server 2012 for low latency: http://msdn.microsoft.com/library/windows/hardware/jj248719 http://technet.microsoft.com/en-us/library/hh831415.aspx Most of the optimizations from these articles apply only to Windows Server 2012 and cannot be used on Windows Server 2008 R2. So now I think that since I can optimize Windows Server 2012 for low latency, I probably should upgrade? After the optimizations, how much faster would Windows Server 2012 be (ideally in microseconds :)?

    Read the article

  • Remote Desktop closes with Fatal Error (Error Code: 5)

    - by Swinders
    We have one PC (Windows XP SP3) that we cannot log on to using a Remote Desktop session. Logging on to the PC directly (sitting in front of it, using the connected keyboard and monitor) works fine. From a second PC (I have tried a number of different ones, all Windows XP SP3) I run 'mstsc' and type in the name of the PC to connect to. This shows the login box, where we can enter the correct login details and click OK. Within a few seconds we get an error: Title: Fatal Error (Error Code: 5) Error: Your Remote Desktop session is about to end. This computer might be low on virtual memory. Close your other programs, and then try connecting to the remote computer again. If the problem continues, contact your network administrator or technical support. None of the computers we are using are low on memory (2 GB+) and we let Windows manage the virtual memory side of things. We do not see this with any other PC, and we do use Remote Desktop in meeting rooms to connect to user PCs with no problems.

    Read the article

  • IIS7 URL Rewrite for URL with &rsquo;

    - by blizz
    I'm trying to redirect the following URL in IIS7 using the URL Rewrite module: Category/Cat-3/Objectives-of-Pre-Maintenance/WHAT&rsquo;S-THE-STORY-ON-LOW-CARB-DIETS-AND-EXERCISE-.aspx but for the life of me I cannot get it to work. I tried replacing the &rsquo; with an actual single quote, and that worked fine! But for the purpose of this redirect, I need the HTML code for the quote, and not the quote itself. Here is the rule from web.config: <rule name="Redirect" patternSyntax="ECMAScript" stopProcessing="true"> <match url="^Category/Cat-3/Objectives-of-Pre-Maintenance/WHAT&amp;rsquo;S-THE-STORY-ON-LOW-CARB-DIETS-AND-EXERCISE-\.aspx" /> <conditions> <add input="{HTTP_HOST}" pattern="www.example.com" /> </conditions> <action type="Redirect" url="http://www.example.com/redirecturl/" appendQueryString="false" /> </rule> Any ideas would be very helpful! Thanks

    Read the article

  • Memory Troubles with UIImagePicker

    - by Dan Ray
    I'm building an app that has several different sections to it, all of which are pretty image-heavy. It ties in with my client's website and they're a "high-design" type outfit. One piece of the app is images uploaded from the camera or the library, and a tableview that shows a grid of thumbnails. Pretty reliably, when I'm dealing with the camera version of UIImagePickerControl, I get hit for low memory. If I bounce around that part of the app for a while, I occasionally and non-repeatably crash with "status:10 (SIGBUS)" in the debugger. On low memory warning, my root view controller for that aspect of the app goes to my data management singleton, cruises through the arrays of cached data, and kills the biggest piece, the image associated with each entry. Thusly: - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Low Memory Warning" message:@"Cleaning out events data" delegate:nil cancelButtonTitle:@"All right then." otherButtonTitles:nil]; [alert show]; [alert release]; NSInteger spaceSaved; DataManager *data = [DataManager sharedDataManager]; for (Event *event in data.eventList) { spaceSaved += [(NSData *)UIImagePNGRepresentation(event.image) length]; event.image = nil; spaceSaved -= [(NSData *)UIImagePNGRepresentation(event.image) length]; } NSString *titleString = [NSString stringWithFormat:@"Saved %d on event images", spaceSaved]; for (WondrMark *mark in data.wondrMarks) { spaceSaved += [(NSData *)UIImagePNGRepresentation(mark.image) length]; mark.image = nil; spaceSaved -= [(NSData *)UIImagePNGRepresentation(mark.image) length]; } NSString *messageString = [NSString stringWithFormat:@"And total %d on event and mark images", spaceSaved]; NSLog(@"%@ - %@", titleString, messageString); // Relinquish ownership any cached data, images, etc that aren't in use. } As you can see, I'm making a (poor) attempt to eyeball the memory space I'm freeing up. I know it's not telling me about the actual memory footprint of the UIImages themselves, but it gives me SOME numbers at least, so I can see that SOMETHING'S happening. (Sorry for the hamfisted way I build that NSLog message too--I was going to fire another UIAlertView, but realized it'd be more useful to log it.) Pretty reliably, after toodling around in the image portion of the app for a while, I'll pull up the camera interface and get the low memory UIAlertView like three or four times in quick succession. Here's the NSLog output from the last time I saw it: 2010-05-27 08:55:02.659 EverWondr[7974:207] Saved 109591 on event images - And total 1419756 on event and mark images wait_fences: failed to receive reply: 10004003 2010-05-27 08:55:08.759 EverWondr[7974:207] Saved 4 on event images - And total 392695 on event and mark images 2010-05-27 08:55:14.865 EverWondr[7974:207] Saved 4 on event images - And total 873419 on event and mark images 2010-05-27 08:55:14.969 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images 2010-05-27 08:55:15.064 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images And then pretty soon after that we get our SIGBUS exit. So that's the situation. Now my specific questions: THE time I see this happening is when the UIPickerView's camera iris shuts. 
 When I click the button to take the picture, it does the "click" animation, and Instruments shows my memory footprint going from about 10 MB to about 25 MB, where it sits until the image is delivered to my UIViewController, at which point usage drops back to 10 or 11 MB. If we make it through that without a memory warning, we're golden, but most likely we don't. Is there anything I can do to make that step less expensive? Second, I have NSZombies enabled. Am I understanding correctly that that's actually preventing memory from being freed? Am I subjecting my app to an unfair test environment? Third, is there some way to programmatically get my memory usage? Or at least the usage for a UIImage object? I've scoured the docs and don't see anything about that.

    Read the article

  • Thinking of Powerline adapters for networking

    - by Henderson McCoy
    Will powerline work as well as wifi? I am now in a large house (renting a room); the router is in the front of the house and I am in the back. I don't know if wifi will have the range and stability, so I am looking at all the alternatives, from running a long cable to powerline. It looks like a powerline adapter set will cost about $50: http://www.bestbuy.com/site/Actiontec---500-Mbps-Powerline-Home-Theater-Network-Adapter-Kit/5215483.p?id=1218625358741&skuId=5215483 Has anyone used these or other powerline adapters? How is powerline's speed? Are they more stable than wifi? The powerline cost is fine; I am just curious about the performance.

    Read the article

  • linux ssh -X graphical applications will not start when system load is high

    - by Chrisv
    So I am using ssh -X to access a server. I am at a Xubuntu desktop accessing a Ubuntu server that is in the next room. Usually everything works fine, but when the system load gets high, any graphical applications I have freeze and fail to be restarted. This happens even if the process that is causing the high load has been niced to a low priority with "nice -n 19". And even though the system load is high, the command line works fine with no delay, and other applications I have running on the server (e.g. virtual machines) run fine. But any graphical application running through X dies. When the graphical applications fail they usually give out an error message that suggests a time-out. It seems that something connected to X has a low priority and times out. But what is it, and how does one fix it?

    Read the article

  • Lack of Virtual Memory - confusion

    - by Wesley
    Specs ahead of time: AMD Athlon XP 2400+ @ 2.00 GHz / 1 GB PC-3200 DDR RAM / 160 GB IDE HDD / 128 MB GeForce 6200 AGP / FIC AM37 / 350W PSU / Windows XP Pro SP3 So, XP has pop-ups saying that I am low on virtual memory. However, I have a program called SmartRAM (part of Advanced SystemCare by IObit) and it displays CPU & Page File Usage and Free Physical and Virtual Memory. That program shows that I have at least 2000 MB free virtual memory on my machine when the XP pop-ups say that I'm low on virtual memory. First off, would a complete lack of virtual memory cause my computer to freeze? Secondly, how can I solve this lack of virtual memory? (A complete reformat is possible but can't be done immediately...) Thanks in advance.

    Read the article

  • sound volume increase beyond 100% whenever possible on linux

    - by fakedrake
    Some audio output from files or streams is too low. It is obvious that the hardware is able to play the same sounds louder, but because of the data it just plays them at some low level even at 100% volume. VLC can generally increase the volume of a file up to 200%. Is there a way to do the same thing VLC does system-wide, and if possible for an arbitrary percentage value? If there is no application that does this, where should I look for libraries to do it myself, or what code should I modify (e.g. code in alsamixer)? Thank you.
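
    If PulseAudio is in the picture, its sink volumes are not capped at 100%, so one rough system-wide equivalent of VLC's over-amplification is to push the sink volume above 100% with pactl. A sketch (the @DEFAULT_SINK@ token and the acceptance of values above 100% are assumptions about a reasonably recent PulseAudio; on plain ALSA the softvol plugin plays a similar role):

        import subprocess

        def set_system_volume(percent):
            # pactl accepts percentages above 100, which is software
            # over-amplification, the same trick VLC applies to its own stream
            subprocess.run(
                ["pactl", "set-sink-volume", "@DEFAULT_SINK@", f"{percent}%"],
                check=True)

        set_system_volume(150)   # e.g. 150%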

    Read the article

  • Strategy and AI for the game 'Proximity'

    - by smci
    'Proximity' is a strategy game of territorial domination similar to Othello, Go and Risk. Two players, played on a 10x12 hex grid. The game was invented by Brian Cable in 2007. It seems to be a worthy game for discussing a) optimal strategy and then b) how to build an AI. Strategies are going to be probabilistic or heuristic-based, due to the randomness factor and the high branching factor (it starts out at 120), so it will be kind of hard to compare objectively. A compute time limit of 5s per turn seems reasonable. Game: Flash version here and many copies elsewhere on the web. Rules: here. Object: to have control of the most armies after all tiles have been placed. Each turn you receive a randomly numbered tile (value between 1 and 20 armies) to place on any vacant board space. If this tile is adjacent to any ally tiles, it will strengthen each of those tiles' defenses by +1 (up to a max value of 20). If it is adjacent to any enemy tiles, it will take control over them if its number is higher than the number on the enemy tile. Thoughts on strategy: here are some initial thoughts; setting the computer AI to Expert will probably teach a lot:

    1. Minimizing your perimeter seems to be a good strategy, to prevent flips and minimize worst-case damage.
    2. Like in Go, leaving holes inside your formation is lethal, only more so with the hex grid, because you can lose armies on up to 6 squares in one move.
    3. Low-numbered tiles are a liability, so place them away from your main territory, near the board edges, and scattered. You can also use low-numbered tiles to plug holes in your formation, or make small gains along the perimeter which the opponent will not tend to bother attacking.
    4. A triangle formation of three pieces is strong, since they mutually reinforce and also reduce the perimeter.
    5. Each tile can be flipped at most 6 times, i.e. when its neighbor tiles are occupied. Control of a formation can flow back and forth. Sometimes you lose part of a formation and plug any holes to render that part of the board 'dead' and lock in your territory / prevent further losses.
    6. Low-numbered tiles are obvious-but-low-valued liabilities, but high-numbered tiles can be bigger liabilities if they get flipped (which is harder). One lucky play with a 20-army tile can cause a swing of 200 (from +100 to -100 armies). So tile placement will have both offensive and defensive considerations.

    Comments 1, 2 and 4 seem to resemble a minimax strategy where we minimize the maximum expected possible loss (modified by some probabilistic consideration of the value ß the opponent can get from 1..20, i.e. a structure which can only be flipped by a ß=20 tile is 'nearly impregnable'). I'm not clear what the implications of comments 3, 5 and 6 are for optimal strategy. Interested in comments from Go, Chess or Othello players. (The sequel ProximityHD for XBox Live allows 4-player cooperative or competitive local multiplayer and increases the branching factor, since you now have 5 tiles in your hand at any given time, of which you can only play one. Reinforcement of ally tiles is increased to +2 per ally.)
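
    To make the heuristic talk concrete, here is a minimal one-ply evaluator for a single placement, sketched in Python. Everything in it is an assumption for illustration: the board is a dict mapping hex coordinates to (owner, armies), the direction table is a hypothetical axial-neighbour scheme, and the exposure penalty is just one crude way to fold in the 1..20 randomness of the opponent's future draws.

        # All names and scoring constants below are illustrative assumptions.
        AXIAL_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

        def neighbours(cell):
            c, r = cell
            return [(c + dc, r + dr) for dc, dr in AXIAL_DIRS]

        def score_placement(board, cell, tile, me):
            """Immediate gain of placing `tile` armies on empty `cell`,
            minus a rough penalty for the perimeter it exposes."""
            gain, exposed = 0, 0
            for n in neighbours(cell):
                if n not in board:
                    exposed += 1            # open edge that can be attacked later
                    continue
                owner, armies = board[n]
                if owner == me:
                    gain += 1               # ally reinforced (+1, capped at 20 in the real game)
                elif armies < tile:
                    gain += 2 * armies      # flip: opponent loses it, we gain it
                else:
                    exposed += 1            # a stronger enemy stays on our border
            # Chance that a later enemy draw (uniform 1..20) beats this tile on each exposed side
            p_beaten = (20 - tile) / 20.0
            return gain - exposed * p_beaten * tile / 4.0

        def best_move(board, empty_cells, tile, me):
            return max(empty_cells, key=lambda c: score_placement(board, c, tile, me))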

    Read the article

  • An efficient setup of several VPSs on one box?

    - by Abs
    Hello all, I hope it's OK to ask this question on Server Fault; it's not an actual fault but more of a request for implementation advice. I would like to have a dedicated server that I can deploy my own VPSs on. These VPSs will run various Windows, Mac and Linux operating systems. I was thinking of buying a large Linux-based dedicated server, running VMware Server or VirtualBox, and adding my own images on there for each OS, but I suspect this isn't going to be cost-effective or easy to maintain. I am hoping someone can help me with the perfect setup that is both cost-effective and efficient, so that I can have 6 VPSs at my disposal that I can easily control. Thanks all for any help.

    Read the article

  • Raid0 performance degradation?

    - by davy8
    Not sure if this belongs here or on SuperUser; feel free to move as appropriate. I've noticed the performance of my RAID0 setup seems to have degraded over the past months. The throughput is fine, but I think the random access time has increased or something. In use I generally see about 1-5 MB/sec when loading stuff in Visual Studio and other apps, and it doesn't seem like the CPU is the bottleneck, as CPU utilization is pretty low. I don't recall what the access time used to be, but HD Tune is reporting 12.6 ms. Read throughput is showing as averaging about 125 MB/sec, so it should be great for sequential reads. I defrag daily and it shows fragmentation levels as low, so that shouldn't be an issue. Additional info: Windows 7 x64, Intel RAID controller on the mobo, WD Black 500GB (I think 32 MB cache) x2.

    Read the article
