Search Results

Search found 10789 results on 432 pages for 'cpu upgrade'.

Page 332/432 | < Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >

  • Oracle - Trigger to check constraint before insert

    - by user1816507
    I would like to create a simple trigger that checks a stored variable from a table: if the value is '1', the insert should be allowed; if it is '2', an error should be raised.

        CREATE OR REPLACE TRIGGER approval
        BEFORE INSERT ON VIP
        REFERENCING OLD AS MEMBER
        FOR EACH ROW
        DECLARE
          CONDITION_CHECK NUMBER;
        BEGIN
          SELECT CONDITION INTO CONDITION_CHECK FROM MEMBER;
          IF CONDITION_CHECK = '2' THEN
            RAISE_APPLICATION_ERROR (-20000, 'UPGRADE DENIED!');
          END IF;
        END;

    But this trigger blocks all inserts, even when the condition value is '1'.
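
    A minimal sketch of one likely fix, assuming VIP has a MEMBER_ID column that references MEMBER.MEMBER_ID (hypothetical column names): look up the condition for the row being inserted via :NEW, instead of selecting from the whole MEMBER table with no WHERE clause.

        CREATE OR REPLACE TRIGGER approval
        BEFORE INSERT ON VIP
        FOR EACH ROW
        DECLARE
          CONDITION_CHECK MEMBER.CONDITION%TYPE;
        BEGIN
          -- :NEW holds the row being inserted; MEMBER_ID is an assumed column name
          SELECT CONDITION INTO CONDITION_CHECK
            FROM MEMBER
           WHERE MEMBER_ID = :NEW.MEMBER_ID;

          IF CONDITION_CHECK = 2 THEN
            RAISE_APPLICATION_ERROR(-20000, 'UPGRADE DENIED!');
          END IF;
        END;
        /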

    Read the article

  • Android image scaling to support multiple resolutions

    - by tyuo9980
    I've coded my game for 320x480 and I figure the easiest way to support multiple resolutions is to scale the final image. What are your thoughts on this? Would it be CPU-efficient to do it this way? I have all my images in the mdpi folder; I'll draw them unscaled into a buffer, then scale the buffer to fit the screen. All user input will be scaled as well. I have two questions: how do you draw a bitmap without Android automatically scaling it, and how do you scale a bitmap?
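
    A minimal sketch of both steps (Java/Android); R.drawable.sprite, screenWidth and screenHeight are placeholder names:

        // Load a resource without the automatic density scaling
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false;
        Bitmap raw = BitmapFactory.decodeResource(getResources(), R.drawable.sprite, opts);

        // Draw the game unscaled into a 320x480 off-screen buffer...
        Bitmap frame = Bitmap.createBitmap(320, 480, Bitmap.Config.ARGB_8888);
        Canvas buffer = new Canvas(frame);
        buffer.drawBitmap(raw, 0, 0, null);

        // ...then scale the finished frame once to the real screen size
        Bitmap scaled = Bitmap.createScaledBitmap(frame, screenWidth, screenHeight, true);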

    Read the article

  • TeamCity's build agent unable to find PowerShell

    - by cincura.net
    I have TeamCity's build agent installed on Windows 2008 R2 SP1 Core. The server has PowerShell 2.0 installed (double-checked; I actually downloaded the TeamCity installation from within PowerShell). Looking at some build configurations, I see they are marked incompatible with this agent because powershell_x86/powershell_x64 is required. I tried deleting the build agent's directories to force an upgrade, but no luck. Interestingly, if I add the powershell_x86 and powershell_x86_Path variables (and the 64-bit equivalents) to the config file manually, everything runs fine. Is there anything I can do to have the build agent find PowerShell automatically? What is it looking for, and where? Maybe the 'Core' install is the problem.

    Read the article

  • Why is there a difference between assembly languages on Windows and Linux?

    - by mcaaltuntas
    I am relatively new to all this low-level stuff and assembly language, and I want to learn in more detail. Why is there a difference between Linux and Windows assembly? As I understand it, when I compile C code the toolchain does not really produce pure machine or assembly code; it produces OS-dependent binary code. But why? For example, on an x86 system the CPU only understands x86 instructions, am I right? So why don't we write pure x86 assembly, and why are there different assembly variations depending on the operating system? If we wrote pure assembly, or the toolchain produced pure assembly, wouldn't there be no binary compatibility issues between operating systems? I am really wondering about all the reasons behind this. Any detailed answer, article, or book would be great. Thanks.

    Read the article

  • Lookup function not working (RS SP2)

    - by Al Reyes
    Hi, I upgraded to SP2. I'm trying to use the Lookup function to link data from two different servers. As a first, simpler exercise I'm linking data from two datasets on the same server: one dataset with journals and the other with the account descriptions. The expression in a field on my table looks like this:

        =Lookup(Fields!ACTINDX.Value, Fields!ACTINDX.Value, Fields!ACTDESCR.Value, "ACCTINFO")

    I double-checked the names and used only uppercase for datasets and fields, but when I try to preview I receive the following message: "An error occurred during local report processing. The definition of the report '/DETAIL' is invalid. The Value expression for the text box 'ACTINDX' refers to the field 'ACTDESCR'. Report item expressions can only refer to fields within the current dataset scope or, if inside an aggregate, the specified dataset scope." I'd appreciate any suggestions. Regards, Al

    Read the article

  • ClickOnce deployment is leaving multiple versions (yes, more than 2)

    - by Clyde
    I've got a ClickOnce application that is leaving all old versions on my disk. It's an internal corporate application that gets frequent updates, so this is a disaster for our rapidly inflating backup size. According to the docs and other SO questions, it is supposed to keep only the current and previous versions on disk. However, each time I deploy the project and upgrade a client, I get another copy of all exe/dll/data files. I'm making no changes whatsoever to the application, just pushing deploy again in Visual Studio. Any ideas? Updates: The problem happens on both Windows 7 and XP, 64-bit Windows and 32-bit. I've done a diff of the folders where the versions are installed, and the following files differ: MyApp.exe.manifest, MyApp.exe.cdf-ms, MyDll1.cdf-ms, MyDll2.cdf-ms. No actual executable files differ, nor do MyApp.manifest, MyDll1.manifest, etc.

    Read the article

  • C++/CLI Missing MSVCR90.DLL

    - by Mitch
    I have a C++/CLI DLL that I load at runtime, and it works great in debug mode. If I try to load the DLL in release mode, it fails to load stating that one or more dependencies are missing. If I run Dependency Walker against it, MSVCR90.DLL is missing as a dependency of MSVCM90.DLL. If I check the debug version of the DLL, it has the same missing dependency, but against the debug (D) version. I have made sure both debug and release embed the manifest file. I read something about there being issues when the app loading the DLL is built as Any CPU and the DLL is built as x86, but I don't see how to set them both to x86. I am using VS2010. Anyway, I've been messing around for a while now and have no idea what is wrong. I'm sure someone out there knows what is going on. Let me know if I need to include additional info.

    Read the article

  • What's your development setup? (Talking right now to my boss)

    - by Flinkman
    How do I tell my boss that I need endless CPU power to automate my daily job? By the way, what's your setup now, in September 2008? How fast are your disks? How much memory? How many cores? How big a screen? (OK, "what the hell are you doing," you may ask. I'm working in multiple environments under VMware, with a couple of build systems running for compatibility tests. The build systems are automated, and so is their setup. Is there another way?) Thanks!

    Read the article

  • How to track IIS server performance

    - by Chris Brandsma
    I have a recurring issue where a customer calls up and complains that the web site is too slow. Specifically, if they are inactive for a short period of time and then go back to the site, there is a one- to two-minute delay before the user sees a response (the standard browser is Firefox in this case). I have Perfmon up and running; CPU utilization is usually below 20% (single proc... don't ask). The database is humming along. And I'm pulling my hair out. So, what metrics/tools do you find useful when evaluating IIS performance?

    Read the article

  • Amazon EC2 prices for Windows Instance?

    - by Abhishek Gupta
    Hello guys, I want to ask some Amazon cloud experts: is it profitable to deploy our web application on the Amazon cloud compared to a normal server? Currently there are micro, small, large and other instance types available. If we start with a micro instance and then realize our app needs more CPU cycles and RAM, how can we move dynamically to the next, more powerful instance automatically at runtime? What is the approximate minimum yearly cost for a single EC2 Windows small instance? I want to deploy a simple online quiz application (ASP.NET based) on the Amazon cloud which can have a maximum of 500 users at a time. Please advise, as I'm very new to the cloud. Should I go for Azure or Amazon?

    Read the article

  • List Python package dependencies without loading them?

    - by Denis
    Say that Python package A requires B, C and D; is there a way to list A → B C D without loading them? The Requires entries in the metadata (yolk -M A) are often incomplete, grr. One can download A.tar / A.egg and then look through A/setup.py, but some of those are pretty gory. (I'd have thought that getting at least first-level dependencies could be mechanized; even a 98% solution would be better than avalanching downloads.) A related question: pip-upgrade-package-without-upgrading-dependencies
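
    A minimal sketch with a modern Python (3.8+): the installed distribution's metadata is read directly, and nothing gets imported. "A" is a placeholder package name.

        from importlib.metadata import requires, PackageNotFoundError

        def first_level_deps(name):
            """Return the declared first-level requirements of an installed package."""
            try:
                return requires(name) or []   # e.g. ['B>=1.0', 'C', 'D; extra == "test"']
            except PackageNotFoundError:
                return []

        print(first_level_deps("A"))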

    Read the article

  • Need ideas for reprocessing images using attachment_fu

    - by cswebgrl
    Hi, I discovered a bug in my Rails app caused by Rails and gem upgrades and undocumented code from the previous developers. I have a lot of images that have been processed, but not sized correctly, using attachment_fu. All of the images uploaded since the upgrade need to be resized correctly. Does anyone have any ideas on how to reprocess all of the images within the folders and resize them to the correct sizes? I'd hate to have to do them all manually. THANKS!! Cindy

    Read the article

  • OpenMP vs OpenCL for computer vision

    - by user1235711
    I am creating a computer vision application that detects objects via a web camera, and I am currently focusing on its performance. My problem is in the part of the application that generates the XML cascade file from the Haar training data; it is very slow and takes about 6 days. To get around this I decided to use parallel processing to minimize the total time to generate the Haar training XML file. I found two options: OpenCL, and OpenMP/OpenMPI. Now I'm confused about which one to use. I read that OpenCL uses multiple CPUs and the GPU, but only on the same machine. Is that so? On the other hand, OpenMP is for shared-memory multiprocessing, and with OpenMPI we can use multiple CPUs over the network, but OpenMP has no GPU support. Can you please suggest the pros and cons of using either of these?
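
    For reference, a minimal OpenMP sketch in C (the per-pixel work is a placeholder): shared-memory parallelism on a single machine is one compiler flag (-fopenmp) plus a pragma around the hot loop.

        #include <omp.h>

        void scale_pixels(float *pixels, int n)
        {
            /* each thread processes its own slice of the loop */
            #pragma omp parallel for
            for (int i = 0; i < n; ++i)
                pixels[i] *= 0.5f;   /* placeholder per-pixel work */
        }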

    Read the article

  • How to implement HeightMap smoothing using Thrust

    - by igal k
    I'm trying to implement height map smoothing using Thrust. Let's say I have a big array (around 1 million floats). A well-known graphics algorithm for this is to average the neighbourhood around a given cell. If, for example, I need to calculate the value at cell[i,j], what I basically do is:

        cell[i,j] = cell[i-1,j-1] + cell[i-1,j] + cell[i-1,j+1]
                  + cell[i,j-1]   + cell[i,j]   + cell[i,j+1]
                  + cell[i+1,j-1] + cell[i+1,j] + cell[i+1,j+1]
        cell[i,j] /= 9

    That's the CPU code. Is there a way to implement it using Thrust? I know that I could use the transform algorithm, but I'm not sure it's correct to access cells that are being touched by different threads (bank conflicts and so on).
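
    A minimal sketch of that filter with Thrust, assuming a width x height grid stored row-major in a device_vector (border cells are simply copied through). Writing into a second buffer and then swapping avoids reading and writing the same cells from different threads:

        #include <thrust/device_vector.h>
        #include <thrust/transform.h>
        #include <thrust/iterator/counting_iterator.h>

        struct smooth_op
        {
            const float* in;
            int w, h;

            smooth_op(const float* in_, int w_, int h_) : in(in_), w(w_), h(h_) {}

            __host__ __device__
            float operator()(int idx) const
            {
                int i = idx / w, j = idx % w;
                if (i == 0 || j == 0 || i == h - 1 || j == w - 1)
                    return in[idx];                       // leave border cells untouched
                float sum = 0.0f;
                for (int di = -1; di <= 1; ++di)          // 3x3 neighbourhood, centre included
                    for (int dj = -1; dj <= 1; ++dj)
                        sum += in[(i + di) * w + (j + dj)];
                return sum / 9.0f;
            }
        };

        void smooth(thrust::device_vector<float>& cells, int w, int h)
        {
            thrust::device_vector<float> out(cells.size());
            thrust::counting_iterator<int> first(0), last((int)cells.size());
            thrust::transform(first, last, out.begin(),
                              smooth_op(thrust::raw_pointer_cast(cells.data()), w, h));
            cells.swap(out);                              // separate output buffer: no read/write races
        }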

    Read the article

  • Browser showing half progress bar even after page is fully rendered

    - by Rama
    Hi, what could be the reason the browser shows a progress bar (stuck at half), as if it is still trying to load something, even after the page is rendered? This is an intranet ASP.NET website. How can I find out the reason? The browser is IE8; this actually started after the browser was upgraded from IE6 to IE8, though I'm not sure the issue has anything to do with the browser upgrade. Will a tool like Fiddler help me find out what it is still trying to load? Thanks in advance.

    Read the article

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit-wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit-wide reads and writes in all cases? For example, if I do a read-modify-write operation like this:

        volatile uint32_t* p_reg = 0xCAFE0000;
        *p_reg |= 0x01;

    I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit-wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.
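
    A minimal sketch of the usual approach in C: keep every access to the register a volatile uint32_t load or store and do the bit manipulation on a plain local. Most compilers honour the declared width of a volatile access, although the C standard does not strictly promise it, so checking the generated assembly for the hot registers is still worthwhile. The address below is only the example value from the question.

        #include <stdint.h>

        #define REG_ADDR 0xCAFE0000u

        static inline uint32_t reg_read32(volatile uint32_t *reg)
        {
            return *reg;                       /* one full 32-bit load */
        }

        static inline void reg_write32(volatile uint32_t *reg, uint32_t val)
        {
            *reg = val;                        /* one full 32-bit store */
        }

        void set_enable_bit(void)
        {
            volatile uint32_t *p_reg = (volatile uint32_t *)REG_ADDR;
            uint32_t val = reg_read32(p_reg);  /* read-modify-write in three explicit steps */
            reg_write32(p_reg, val | 0x01u);
        }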

    Read the article

  • What is the fastest way to get the persisted object after calling Hibernate's saveOrUpdate?

    - by Dave
    I'm using Hibernate 3.2.1.ga, Hibernate Annotations 3.2.1.ga, and hibernate-jpa-2.0-api. I can't upgrade at this time as I'm working with legacy code. I have this generic method for saving or updating objects:

        protected void saveOrUpdate(Object obj) {
            final Session session = sessionFactory.getCurrentSession();
            session.saveOrUpdate(obj);
        }

    You can assume that every argument "obj" will have a member field marked with the "@Id" annotation. I would like to change the return type so the method returns an object that represents the persisted object in the database (meaning that if "obj" didn't contain an id before, what is returned is the database object with a populated id). What is the fastest way to do this given my version and generics constraints?
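
    A minimal sketch, not a definitive answer: saveOrUpdate() assigns the generated identifier to the very instance that was passed in, so the cheapest "persisted object" to return is obj itself; Session.merge() is the alternative when a detached copy should come back as a managed instance.

        protected <T> T saveOrUpdate(T obj) {
            final Session session = sessionFactory.getCurrentSession();
            session.saveOrUpdate(obj);   // populates the @Id field on obj for new entities
            return obj;
        }

        // Alternative sketch if a managed copy of a detached instance is wanted:
        // protected <T> T save(T obj) { return (T) sessionFactory.getCurrentSession().merge(obj); }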

    Read the article

  • R problems using rpart with 4000 records and 13 attributes

    - by josh
    I have attempted to email the author of this package without success; just wondering if anybody else has experienced this. I am having an issue using rpart on 4000 rows of data with 13 attributes. I can run the same test on 300 rows of the same data with no issue. When I run on 4000 rows, Rgui.exe runs consistently at 50% CPU and the UI hangs; it will stay like this for at least 4-5 hours if I let it run, and never finishes or becomes responsive. Here is the code I am using on both the 300- and 4000-row subsets:

        train <- read.csv("input.csv", header=T)
        y <- train[,18]
        x <- train[,3:17]
        library(rpart)
        fit <- rpart(y~., x)

    Is this a known limitation of rpart, or am I doing something wrong? Potential workarounds? Any assistance appreciated.

    Read the article

  • Visual Studio 2010 Database Project does not understand Schema Names anymore?

    - by Xenan
    I just tried to upgrade a Visual Studio 2008 Database project to VS2010, and it is actually quite a mess: hundreds of warnings, all unresolved references. It seems to boil down to Visual Studio no longer understanding schema names (aka ownership). For example, the standard dbo schema, [$(MyDataBase)].dbo.MyTable, is fine, but [$(MyDataBase)].myschema.MyTable gives an unresolved reference. It did work in VS2008. The abbreviation for dbo, the double dot ([$(MyDataBase)]..MyTable), doesn't work anymore either. In the project property windows I restored the references to the correct servers (which were lost after the conversion), but that didn't help. This seems pretty basic, but I don't have a clue how to solve it. Any help is appreciated.

    Read the article

  • List of phones that will work with Eclipse?

    - by user1058647
    I need an Android phone to test my apps with that will work with Eclipse. It has to be low cost and run Gingerbread, with modest memory and CPU. Thinking that any Android phone would work, I recently purchased a Virgin Mobile Chaser, but as it turns out it cannot be seen by either Eclipse or adb (Device Manager does see the phone). Another developer has had the same problem with the Chaser. I could keep buying phones and see if they work, but that could be long and frustrating. I hope to find a "no contract" phone. Is there any list of phones that work with Eclipse? Does anyone know of any other Virgin Mobile phones that will work? Thanks, Gary

    Read the article

  • What alternatives are there to Google App Engine?

    - by Chris Marasti-Georg
    What alternatives are there to GAE, given that I already have a good bit of working code that I would like to keep? In other words, I'm digging Python. However, my use case is more of a low-request-count, higher-CPU-usage one, and I'm worried that I may not be able to stay with App Engine forever. I have heard a lot of people talking about Amazon Web Services and other cloud providers, but I am having a hard time seeing where most of these other offerings provide the range of services (data querying, user authentication, automatic scaling) that App Engine provides. What are my options here?

    Read the article

  • Where are Wireless Profiles stored in Ubuntu [closed]

    - by LonnieBest
    Where does Ubuntu store the profiles that allow it to remember the credentials for private wireless networks it has previously authenticated to and used? I just replaced my uncle's hard drive with a new one and installed Ubuntu 10.04 on it (he had Ubuntu 9.10 on his old hard drive). He is at my house right now, and I want him to be able to access his private wireless network when he gets home. Usually, when I upgrade Ubuntu, I have his /home directory on another partition, so his wireless profile for his own network persists. Right now, however, I'm trying to figure out which dot-folder I need to copy from his /home/user folder on the old hard drive to the new one so that he will have wireless Internet when he gets home. Does anyone know with certainty exactly which folder I need to copy to the new hard drive to achieve this?

    Read the article

  • Filter Warnings In Error List Windows In Visual Studio 2010

    - by Chuck Haines
    I recently got approval to upgrade our project from .NET 1.1 to .NET 4. I loaded the project in Visual Studio 2010 and have it compiling and working. However, as is to be expected, there are over 3000 warnings I need to start looking at and handling. The problem is that this solution has about 20 projects in it, so what I'd like to be able to do is filter the warnings by project, i.e. only show warnings for the project I pick. Does anyone know if this is possible in Visual Studio 2010, or if there is an add-on for it?

    Read the article

  • C++ programming for clusters and HPC

    - by Abruzzo Forte e Gentile
    Hi all, I need to write a scientific application in C++ that does a lot of computation and uses a lot of memory. I have part of the job done, but due to the high resource requirements I was thinking of moving to OpenMPI. Before doing that, I have a simple question: if I understood the principle of OpenMPI correctly, it is the developer who has the task of splitting the job over different nodes, calling SEND and RECEIVE depending on which node is available at the time. Do you know whether some library, OS, or anything else exists that has this capability and lets my code remain as it is now? Basically something that connects all the computers and lets them share their memory and CPU as one? I am a bit confused by the amount of material available on the topic. Should I look at cloud computing, or Distributed Shared Memory? Can you help me or point me in the right direction? Thanks
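
    For the SEND/RECEIVE model mentioned above, a minimal MPI sketch in C++ (the payload and the doubling are placeholders): rank 0 ships a value to rank 1 and waits for the result.

        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double value = 0.0;
            if (rank == 0) {
                value = 42.0;                                   // placeholder payload
                MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                value *= 2.0;                                   // do the "work"
                MPI_Send(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
            }

            MPI_Finalize();
            return 0;
        }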

    Read the article

  • Specify sorting order for a GROUP BY query to retrieve oldest or newest record for each group

    - by Beau Simensen
    I need to get the most recent record for each device from an upgrade request log table. A device is unique based on the combination of its hardware ID and its MAC address. I have been attempting to do this with GROUP BY, but I am not convinced this is safe, since it looks like it may simply return the "top record" (whatever SQLite or MySQL thinks that is). I had hoped that this "top record" could be hinted at by way of ORDER BY, but that does not seem to have any impact, as both of the following queries return the same record for each device, just in opposite order:

        SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created DESC
        SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created ASC

    Is there another way to accomplish this? I've seen several somewhat related posts that all involved subselects. If possible, I would like to do this without subselects, as I would like to learn how it can be done without them.
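
    A minimal sketch of the usual subselect-free approach: self-join the log against itself and keep only rows for which no newer row exists for the same extHwId/mac pair.

        SELECT r.extHwId, r.mac, r.created
          FROM upgradeRequest r
          LEFT JOIN upgradeRequest newer
            ON newer.extHwId = r.extHwId
           AND newer.mac     = r.mac
           AND newer.created > r.created
         WHERE newer.created IS NULL;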

    Read the article
