Search Results

Search found 22794 results on 912 pages for 'bit cost'.

Page 158/912 | < Previous Page | 154 155 156 157 158 159 160 161 162 163 164 165  | Next Page >

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 University community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn’t happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments. Organizations often copy production data that contains sensitive information into non-production environments so they can test applications and systems using “real world” information. Data in non-production environments has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. As in production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand. Cornell’s applications and databases help carry out the administrative and academic mission of the university. They are running Oracle PeopleSoft Campus Solutions, which includes highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee’s laptop was stolen. Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and providing free credit checks and identity theft protection services—all of which cost money and took time away from other projects. To avoid this issue in the future, Cornell came up with several options, one of which was to sanitize the testing and training environments. “The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us. In the end we chose Oracle’s solution based on its architecture and its functionality.” – Tony Damiani, Database Administration and Business Intelligence, Cornell University. The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, but still support all the previous activities in the non-production environments. Tony concludes, “What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only. Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal.” With Oracle Data Masking, organizations like Cornell can make application data securely available in non-production environments, prevent application developers and testers from seeing production data, use an extensible template library and policies for data masking automation, and gain the benefits of referential integrity so that applications continue to work. Listen to the podcast to hear the complete interview. Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article

  • Should I be worried about overengineering programming assignments given during interview process?

    - by DormoTheNord
    I recently had a phone interview with a company. After that phone interview, I was told to complete a short programming assignment (a small program; shouldn't take more than three hours). I'm only directly instructed to complete the assignment and turn in the code. I was given complete freedom to use any language I wished and was not told exactly how to turn in the code. Immediately I planned on throwing it on Github, writing a test suite for it, using Travis-CI (free continuous integration for public Github repositories) to run the test suites, and using CMake to build the Linux makefiles for Travis-CI. That way, not only can I demonstrate that I understand how to use Git, CMake, Travis-CI, and how to write tests, but I can also simply link to the Travis-CI page so they can see the output of the tests. I figured that'd make it a tiny bit more convenient for the interviewer. Since I know those technologies well, it would add essentially no time to the assignment. However, I'm a bit worried that doing all this for a relatively simple task would look bad. Although it wouldn't add much more time at all for me, I don't want them thinking I spend too much time on things that should be simple.

    Read the article

  • Snort: not logging anything

    - by ethrbunny
    My site seems to be the target of quite a bit of probing over the last few months. In an attempt to get a better handle on this I installed Snort on one of the machines that has external exposure. Something must not be installed correctly, as I see lots of probing in /var/log/messages but Snort isn't logging anything.
    System: CentOS 6.2 (32 bit)
    Snort: latest build and rules, configured from this very excellent site: http://nachum234.no-ip.org/security/snort/001-snort-installation-on-centos-6-2/
    Snort running as a daemon: /usr/local/bin/snort -d -D -i bond0 -u snort -g snort -c /etc/snort.d/snort.conf -l /var/log/snort
    The snort.log file is empty despite hundreds (or more) of failed login attempts from individual IP addresses. Maybe I'm missing the purpose of Snort? I was hoping it would log this sort of info.

    Read the article

  • Unity Plugin DLLNotFoundException

    - by Dewayne
    I am using a plugin DLL that I created in Visual C++ Express 2010 on Windows 7 64 bit Ultimate Edition. The DLL functions properly on the machine that it was originally created on. The problem is that the DLL is not functioning in the Unity3d Editor on another machine, giving an error that basically states that the DLL is missing some of its dependencies. The target machine is running Windows 7 Home 64 bit (if this is relevant). Results from the error log of Dependency Walker:
    - Error: The Side-by-Side configuration information for "c:\users\dewayne\desktop\shared\vrpnplugin\unityplugin\build\release\OPTITRACKPLUGIN.DLL" contains errors. The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail (14001).
    - Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module.
    - Error: Modules with different CPU types were found.
    - Warning: At least one delay-load dependency module was not found.
    - Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module.
    The Visual C++ Express 2010 project and solution file can be found here: https://docs.google.com/leaf?id=0B1F4pP7mRSiYMGU2YTJiNTUtOWJiMS00YTYzLThhYWQtMzNiOWJhZDU5M2M0&hl=en&authkey=CJSXhqgH The zip is 79MB and also contains its dependencies. The DLL in question is OptiTrackPlugin.dll.

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier we discussed the common solution to the issue of CXPACKET wait time. Today I am going to talk about a few of the other suggestions which can help reduce CXPACKET wait. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD – I totally agree. I have covered them in detail in yesterday’s blog post. Today we are going to discuss a few other ways CXPACKET can be reduced.
    Potential Reasons:
    - If data is heavily skewed, there are chances that the query optimizer may not estimate the amount of data correctly, leading it to assign too few threads to the query. This can easily lead to an uneven workload on threads and may create CXPACKET wait.
    - While retrieving the data, one of the threads may face an IO, memory or CPU bottleneck and have to wait for those resources to execute its task; this may create CXPACKET wait as well.
    - The data being retrieved is on IO subsystems of different speeds. (This is not common and hardly likely, but there is a chance.)
    - Higher fragmentation in some areas of the table can lead to less data per page. This may lead to CXPACKET wait.
    As I said, the reasons mentioned here are not the major causes of CXPACKET wait, but any of these scenarios can contribute to the wait time.
    Best Practices to Reduce CXPACKET wait:
    - Refer to the earlier article regarding MAXDOP and Cost Threshold.
    - Defragmenting indexes can help, as more data can be obtained per page (assuming a fill factor close to 100).
    - If data is in multiple files which are on multiple physical drives of similar speed, CXPACKET wait may be reduced.
    - Keep the statistics updated, as this will give a better estimate to the query optimizer when assigning threads and dividing the data among available threads. Updating statistics can significantly improve the ability of the query optimizer to render a proper execution plan. This may affect the overall parallelism process in a positive way.
    Bad Practice: In one recent consultancy project, when I was called in I noticed that an ‘experienced’ DBA had noticed higher CXPACKET wait and, to reduce it, had increased the number of worker threads. The reality was that increasing worker threads led to many other issues. With a higher number of threads, more memory was used, leading to memory pressure. As there were more threads, the CPU scheduler faced more ‘Context Switching’, further degrading performance. When I explained all this to the ‘experienced’ DBA, he suggested that now we should reduce the number of threads. Not really! A lower number of threads may create heavy stalling for parallel queries. I suggest NOT touching the setting for the number of worker threads when dealing with CXPACKET wait. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats here is generic and it varies from system to system. You are recommended to test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    - by Bjørn
    Hello, I have a server that is running Windows 2008 64 bit Hyper-V, with 8 gigs of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three Virtual Machines, all running Windows 2008 32 bit:
    - Build server, running Team City
    - Staging server
    - SQL Server, running SQL Server 2005
    My trouble with the setup is that the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive, even though the host monitor remains responsive at all times. (I have asked a separate question about that.) So the question here is: what is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am being told that I cannot reliably use the task manager to monitor CPU usage in a VM.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:
    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.
    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:
    - workflow.xml
    - hive-defaults.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka Jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka Jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:
    - Validate our workflow.xml
    - Copy our working directory to HDFS
    - Submit our job to the Oozie server
    - Run our workflow
    Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • Evoland: A Video Game About Video Game History

    - by Jason Fitzpatrick
    Browser-based Evoland is, hands down, one of the more clever video game concepts to come across our desk. The game itself is a history of video games–as you play the game, the game evolves from a limited 8-bit monochrome adventure into a modern game. You start off unable to do anything but move right and collect a treasure chest. That treasure chest unlocks the left key (keys are configured in a WASD style keypad) which in turn allows you to move around a simple monochromatic forest clearing to unlock the rest of the movement keys. From there you begin unlocking more game features, effectively evolving the game from monochrome to 16 and then 64 bit color and unlocking various game play features. The game itself is short and can be played in about the same time you could watch a video covering the basics of various game changes over the last 30 years but actually playing the game and watching the evolution in progress is far more rewarding. Hit up the link below to take it for a spin. Evoland [via Boing Boing]

    Read the article

  • Craig Mundie's video

    - by GGBlogger
    Timothy recently posted “Microsoft Shows Off Radical New UI, Could Be Used In Windows 8” on Slashdot. I took such grave exception to his post that I found it necessary to my senses to write this blog. We need to go back many years to the days of hand-cranked calculators and early mainframe computers. These devices had singular purposes – they were “number crunchers” used to make accounting easier. The front-facing display in early mainframes was “blinken lights.” The calculators did provide printing – in the form of paper tape – and the mainframes used line printers to generate reports as needed. We had other metaphors to work with. The typewriter was/is a mechanical device that substitutes for a typesetting machine. The originals go back to 1867 and the keyboard layout has remained much the same to this day. In the earlier years the Morse code telegraphs gave way to Teletype machines. The old ASR33, seen on the left in this photo of one of the first computers I helped manufacture, used a keyboard very similar to the keyboards in use today. It also generated the punched paper tape that we used to program this computer in machine language. Everything considered, this computer, which dates back to the late 1960s, has a keyboard for input and a roll of paper as output. So in a very rudimentary fashion little has changed. Oh – we didn’t have a mouse! The entire point of this exercise is to point out that we still use very similar methods to get data into and out of a computer regardless of the operating system involved. The Altair, IMSAI, Apple, Commodore and onward to our modern machines changed the hardware that we interfaced to but changed little in the way we input, view and output the results of our computing effort. The mouse made some changes and the advent of windowed interfaces such as Windows and Apple made things somewhat easier for the user. My 4-year-old granddaughter plays her Dora games on our computer. She knows how to start programs, use the mouse, play the game and is quite adept, so we have come some distance in making computers useable. One of my chief bitches is the constant harangue leveled at Microsoft. Yup – they are a money-making organization. You like Apple? No problem for me. I don’t use Apple mostly because I’m comfortable in the Windows environment but probably more because I don’t like Apple’s “Holier than thou” attitude. Some think they do superior things and that’s also fine with me. Obviously the iPhone has not done badly and other Apple products have fared well. But they are expensive. I just built a new machine with 4 Terabytes of storage, an Intel i7 Core 950 processor and 12 GB of RAMIII. It cost me – with dual monitors – less than 2000 dollars. Now to the chief reason for this blog. I’m going to continue developing software for as long as I’m able. For that reason I don’t see my keyboard, mouse and displays changing much for many years. I also don’t think Microsoft is going to spoil that for me by making radical changes to my developer experience. What Craig Mundie does in his video here:  http://www.ispyce.com/2011/02/microsoft-shows-off-radical-new-ui.html is explore the potential future of computer interfaces for the masses of potential users. Using a computer today requires a person to have rudimentary capabilities with keyboards and the mouse. Wouldn’t it be great if all they needed was hand gestures? Although not mentioned, it would also be nice if computers responded intelligently to a user’s voice. 
    There is absolutely no argument with the fact that user interaction with these machines is going to change over time. My personal prediction is that it will take years for much of what Craig discusses to come to a cost-effective reality, but it is certainly coming. I just don’t believe that what Craig discusses will be the future look of Windows 8.

    Read the article

  • Help implementing virtual d-pad

    - by Moshe
    Short Version: I am trying to move a player around on a tilemap, keeping it centered on its tile, while smoothly controlling it with a SneakyInput virtual joystick. My movement is jumpy and hard to control. What's a good way to implement this?
    Long Version: I'm trying to get a tilemap-based RPG "layer" working on top of cocos2d-iphone. I'm using SneakyInput as the input right now, but I've run into a bit of a snag. Initially, I followed Steffen Itterheim's book and Ray Wenderlich's tutorial, and I got jumpy movement working. My player now moves from tile to tile, without any animation whatsoever. So, I took it a step further. I changed my player.position to a CCMoveTo action. Combined with CCFollow, my player moves pretty smoothly. Here's the problem, though: between each CCMoveTo, the movement stops, so there's a bit of jumpiness introduced between movements. To deal with that, I changed my CCMoveTo into a CCMoveBy, and instead of running it once, I decided to wrap it in a CCRepeatForever. My plan was to stop the repeating action whenever the player changed directions or released the d-pad. However, when the movement stops, the player is not necessarily centered along the tiles, as it should be. To correctly position the player, I use a CCMoveTo and get the closest position that would put the player back into the proper position. This reintroduces the earlier problem of jumpiness between actions. What is the correct way to implement a smooth joystick while smoothly animating the player and keeping it on the "grid" of tiles? Edit: It turns out that this was caused by a "Bug Fix" in the cocos2d engine.
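    A common pattern for this kind of grid-locked movement (a sketch only, not tied to cocos2d, SneakyInput, or the engine bug mentioned in the edit) is to drop the CCMoveTo/CCMoveBy chain entirely and move the player yourself in a per-frame update: the player always walks between tile centres at a fixed speed, and the joystick is only consulted at the moment a tile centre is reached, so the sprite can never stop off-grid. Written in Java for brevity; TILE_SIZE and SPEED are made-up values and collision checks against the tilemap are omitted:

        /** Minimal sketch of grid-locked movement: the player always moves between tile
         *  centres at a fixed speed, and a new destination tile is only chosen once the
         *  current one is reached, so the sprite can never stop off-grid. */
        public class TileMover {
            private static final float TILE_SIZE = 32f;   // hypothetical tile size in points
            private static final float SPEED = 96f;       // points per second

            private float x, y;              // current position
            private float targetX, targetY;  // centre of the tile we are walking to

            public TileMover(int startCol, int startRow) {
                x = targetX = startCol * TILE_SIZE;
                y = targetY = startRow * TILE_SIZE;
            }

            /** dirX/dirY is the joystick direction per axis (-1, 0 or 1); dt in seconds. */
            public void update(float dt, int dirX, int dirY) {
                float dx = targetX - x, dy = targetY - y;
                float dist = (float) Math.sqrt(dx * dx + dy * dy);
                float step = SPEED * dt;
                if (dist <= step) {
                    // Arrived: snap exactly onto the tile centre, then pick the next tile
                    // from the joystick (horizontal wins ties), or stand still if released.
                    x = targetX;
                    y = targetY;
                    if (dirX != 0)      targetX += dirX * TILE_SIZE;
                    else if (dirY != 0) targetY += dirY * TILE_SIZE;
                } else {
                    // Still travelling: keep heading toward the current target tile only.
                    x += dx / dist * step;
                    y += dy / dist * step;
                }
            }
        }

    In cocos2d terms the same idea maps onto a scheduled update: method that sets player.position every frame, which keeps the sprite on the grid without chaining and cancelling actions.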

    Read the article

  • Nervous about the "real" world

    - by Randy
    I am currently majoring in Computer Science and minoring in mathematics (the minor is embedded in the major). The program has a strong C++ curriculum. We have done some UNIX and assembly language (not fun), and C and Java are on the way in future classes that I must take. The program I am in did not use the STL, but rather an STL-ish design that was created from the ground up for the program. From what I have read, the STL and what I have taken are very similar, but what I used seemed more user-friendly. Some of the programs that I had to write in C++ for assignments include: a password server that utilized hashing of the passwords for security purposes, a router simulator that used a hash table and maps, a maze solver that used depth-first search, a tree traveler program that traversed a tree using level-order, post-order and in-order traversal, selection sort, insertion sort, bit sort, radix sort, merge sort, heap sort, quick sort, topological sort, stacks, queues, priority queues, and my least favorite, red-black trees. All of this was done in three semesters, which was just enough time to code them up and turn them in. That being said, if I was told to use a stack to convert an equation to infix notation or something, I would be lost for a few hours. My main concern in writing this is: when I graduate and land an interview, what are some of the questions posed to assess my skills? What are some of the most important areas of computer science that are prevalent in the field? I am currently trying to get some ideas for programs I can write in C++ that interest and challenge me to keep learning the language. A Sudoku solver came to mind, but I am lost as to where to start. I apologize for the rant, but I'm just a wee bit nervous about the future. Any tips are appreciated.
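    On the specific worry about the stack-based conversion exercise: the classic interview drill is converting an infix expression to postfix with an operator stack (the shunting-yard algorithm), and it is small enough to practice in an afternoon. Below is a minimal sketch, written in Java for brevity (the same structure maps directly onto C++ and std::stack); space-separated tokens, only the four basic operators, and a well-formed expression are simplifying assumptions:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.Map;

        /** Minimal shunting-yard sketch: converts a space-separated infix expression
         *  such as "3 + 4 * ( 2 - 1 )" into postfix ("3 4 2 1 - * +"). */
        public class InfixToPostfix {
            private static final Map<String, Integer> PRECEDENCE =
                    Map.of("+", 1, "-", 1, "*", 2, "/", 2);

            public static String convert(String infix) {
                StringBuilder output = new StringBuilder();
                Deque<String> ops = new ArrayDeque<>();          // the operator stack
                for (String token : infix.trim().split("\\s+")) {
                    if (PRECEDENCE.containsKey(token)) {
                        // Pop operators of equal or higher precedence before pushing this one.
                        while (!ops.isEmpty() && PRECEDENCE.containsKey(ops.peek())
                                && PRECEDENCE.get(ops.peek()) >= PRECEDENCE.get(token)) {
                            output.append(ops.pop()).append(' ');
                        }
                        ops.push(token);
                    } else if (token.equals("(")) {
                        ops.push(token);
                    } else if (token.equals(")")) {
                        while (!ops.peek().equals("(")) {        // unwind back to the matching "("
                            output.append(ops.pop()).append(' ');
                        }
                        ops.pop();                               // discard the "("
                    } else {
                        output.append(token).append(' ');        // operand goes straight to output
                    }
                }
                while (!ops.isEmpty()) {
                    output.append(ops.pop()).append(' ');
                }
                return output.toString().trim();
            }
        }

    Evaluating the resulting postfix string with a second stack of numbers is a natural follow-up exercise, and together the two cover most of what interviewers tend to probe with this question.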

    Read the article

  • CPU's on Hyper-V host system is just idling, even though VM's are at full throttle

    - by Bjørn
    Hello, I have a server that is running Windows 2008 64 bit Hyper-V, with 8 gigs of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three Virtual Machines, all running Windows 2008 32 bit:
    - Build server, running Team City
    - Staging server
    - SQL Server, running SQL Server 2005
    These three machines are running very sluggishly; they are at 100% CPU even though the host system is barely using any CPU at all, typically below 10% total. Could anyone please give some tips as to the best setup for CPU allocation? Should I have set each server to have two cores, or should I increase this number above the total number of cores on the host? What is a good number to set for the Virtual Machine Reserve and Virtual Machine Limit? Is 8 gigs of physical RAM insufficient for 3 VMs? Thanks for reading. :)

    Read the article

  • Windows 7 Easy Transfer does not start (Windows XP)

    - by Gerhard Weiss
    Windows 7 Easy Transfer does not start on my Windows XP system. I installed Windows 7 Easy Transfer and when I click to start it, nothing happens. Not even an error message. I uninstalled and reinstalled a few times with the same results. I even tried starting it from C:\Program Files\Windows Easy Transfer 7\migwiz.exe with the same results. Nothing happens! Anyone else run into this issue? Any suggestions on how to start it up differently? This is my work system, so I do not know if they have it locked down. It is a Windows XP Dell OptiPlex 755 built about 3 years ago. I downloaded the 32-bit version from this website: http://windows.microsoft.com/en-us/windows7/products/features/windows-easy-transfer and yes, my system is 32-bit ;)

    Read the article

  • Convert MSDN Windows 7 Enterprise installation to Ultimate

    - by stimms
    Related question: When reinstalling Windows 7, does the language, version, architecture (64-bit or 32-bit) or source (OEM, retail, or MSDN) matter? Unthinkingly, I installed Windows 7 Enterprise on my development box at work rather than the Ultimate version. Now Windows is complaining that it is unable to activate because it can't contact a KMS (Key Management Service), which my company doesn't have. Is there any way to change this install so that it believes it is Windows 7 Ultimate and will take an Ultimate key? I have MSDN, so the licensing is not an issue.

    Read the article

  • Do I need path finding to make AI avoid obstacles?

    - by yannicuLar
    How do you know when a path-finding algorithm is really needed? There are contexts where you just want to improve AI navigation to avoid an object, like a space-ship that won't crash into a planet or a car that already knows where to steer but needs small corrections to avoid a road bump. As I've seen on similar posts, the obvious solution is to implement some path-finding algorithm, most likely A*, and let your AI-controlled object navigate through the path. Now, I have the necessary skills to implement a path-finding algorithm, and I'm not being lazy here, but I'm still a bit skeptical about whether this is really needed. I have the impression that path-finding is appropriate for navigating through a maze, or for picking a path when there are many alternatives. But in obstacle avoidance, when you do know the path but need to make slight corrections, is path finding really necessary? Even when the obstacles are too sparse or too small? I mean, in real life, when you're driving and notice a bump on the road, you just have to pick between steering a bit to the left (and having the bump on your right side) or the other way around. You will not consider stopping, or going backwards. Path finding would be appropriate when you need to pick a route through the city, right? So, are there any other methods to help AI navigation, except path-finding? And if there are, how do you know when path-finding is the appropriate algorithm? Thanks for any thoughts.
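    For the "small corrections" case described above, one widely used alternative to full path-finding is local steering: each frame, look a short distance ahead along the current velocity and, if that probe point falls inside an obstacle, add a sideways force pushing away from it. The sketch below uses libGDX's Vector2 only because that type already appears elsewhere on this page; the math works with any 2D vector class, and AHEAD_DISTANCE and MAX_AVOID_FORCE are made-up tuning values:

        import com.badlogic.gdx.math.Vector2;

        /** Minimal obstacle-avoidance steering: nudge the velocity away from an
         *  obstacle that lies ahead, instead of planning a full path around it. */
        public class AvoidSteering {
            private static final float AHEAD_DISTANCE = 3f;   // how far ahead the agent "looks"
            private static final float MAX_AVOID_FORCE = 0.5f;

            /** Returns a steering force; add it to the velocity and re-limit to max speed. */
            public static Vector2 avoid(Vector2 position, Vector2 velocity,
                                        Vector2 obstacleCenter, float obstacleRadius) {
                // Project a probe point ahead of the agent along its current heading.
                Vector2 ahead = new Vector2(velocity).nor().scl(AHEAD_DISTANCE).add(position);
                // If the probe is outside the obstacle, no correction is needed.
                if (ahead.dst(obstacleCenter) > obstacleRadius) {
                    return new Vector2(0, 0);
                }
                // Otherwise push sideways: away from the obstacle centre, through the probe point.
                return new Vector2(ahead).sub(obstacleCenter).nor().scl(MAX_AVOID_FORCE);
            }
        }

    Path-finding tends to become necessary only when a purely local rule like this can get the agent stuck: mazes, dead ends, or obstacle fields dense enough that the detour has to be planned several obstacles ahead.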

    Read the article

  • Podcasting vs Stack Overflow vs Geekswithblogs

    - by MarkPearl
    For a few years now I have been looking for effective ways to be involved in the “community”. While there are a few community programming events in my area (Johannesburg), there isn’t too much face-to-face stuff – which has caused me to turn to the internet. My internet attempts have been varied – at first I took the passive approach of listening to tech podcasts. This was great for a while, but soon the content became semi-repetitive and a little boring. It seemed that the podcasts I was listening to all went round the same themes and speakers, and while I am still a keen listener to several tech podcasts, it didn’t quench my thirst. So I began to be a bit more active – starting with Stack Overflow, where I would scan the site for questions that were in the realm of my ability to answer. It worked for a while, but soon it began to be discouraging – there seem to be so many people who know so much more than me and are quicker at typing that I felt fairly ineffective. So while I still use Stack Overflow when I am in a pickle and need some help, it feels more like me taking from the community than giving anything. Which brought me to Geeks with Blogs. Till I found GWB I hadn’t felt like I was an active part of a community. I had blogged before on Blogspot and Wordpress but hadn’t felt associated with the community. Now when I get a comment from someone on one of my GWB posts either thanking me or adding a bit more or correcting me, it makes me feel like I am contributing to a community. So well done GWB. Thanks for making a spot that makes me feel at home!

    Read the article

  • Debian as USB hardware portable as possible

    - by James Mitch
    I have recent hardware: 64 bit, PAE and so on. But I'd like to have my Debian installation on a USB HDD. Installing Debian to USB is solved; I used the i386 architecture image, but a PAE kernel has been installed. I want to be able to travel with my USB HDD and therefore I want the best possible hardware compatibility. My friends and family sometimes have older hardware, but always i386, just sometimes without 64 bit or PAE. I have never met someone with SPARC or other architectures. What should I do to get a non-PAE kernel and maximum hardware compatibility?

    Read the article

  • Why does integrity check fail for the 12.04.1 Alternate ISO?

    - by mghg
    I have followed various recommendations from the Ubuntu Documentation to create a bootable Ubuntu USB flash drive using the 12.04.1 Alternate install ISO-file for 64-bit PC. But the integrity test of the USB stick has failed and I do not see why. These are the steps I have made: Download of the 12.04.1 Alternate install ISO-file for 64-bit PC (ubuntu-12.04.1-alternate-amd64.iso) from http://releases.ubuntu.com/12.04.1/, as well as the MD5, SHA-1 and SHA-256 hash files and related PGP signatures Verification of the data integrity of the ISO-file using the MD5, SHA-1 and SHA-256 hash files, after having verified the hash files using the related PGP signature files (see e.g. https://help.ubuntu.com/community/HowToSHA256SUM and https://help.ubuntu.com/community/VerifyIsoHowto) Creation of a bootable USB stick using Ubuntu's Startup Disk Creator program (see http://www.ubuntu.com/download/help/create-a-usb-stick-on-ubuntu) Boot of my computer using the newly made 12.04.1 Alternate install on USB stick Selection of the option "Check disc for defects" (see https://help.ubuntu.com/community/Installation/CDIntegrityCheck) Steps 1, 2, 3 and 4 went without any problem or error messages. However, step 5 ended with an error message entitled "Integrity test failed" and with the following content: The ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default file failed the MD5 checksum verification. Your CD-ROM or this file may have been corrupted. I have experienced the same (might only be similar since I have no exact notes) error message in previous attempts using the 12.04 (i.e. not the maintenance release) Alternate install ISO-file. I have in these cases tried to install anyway and have so far not experienced any problems to my knowledge. Is failed integrity check described above a serious error? What is the solution? Or can it be ignored without further problems?

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in Full Screen mode, any applications on my second monitor (such as youtube videos, videos, not app specific) drop their frame-rate to about 2-3 FPS. It seems like some sort of power management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280FPS when the framerate is uncapped. If i cap it at 60FPS using the in-game option, it has no affect on the performance of the background app. Summary Operating System Windows 8 Pro 64-bit CPU Intel Core i7 3820 @ 3.60GHz 42 °C Sandy Bridge-E 32nm Technology RAM 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20) Motherboard Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0) 37 °C Graphics DELL U2713HM (2560x1440@59Hz) DELL U2713HM (2560x1440@59Hz) 1280MB NVIDIA GeForce GTX 570 (Gigabyte) 58 °C Hard Drives 212GB Volume0 (RAID) 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 36 °C 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 34 °C Optical Drives No optical disk drives detected Audio ASUS Xonar Essence STX Audio Device Operating System Windows 8 Pro 64-bit Computer type: Desktop Graphics Monitor 1 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Secondary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY4\Monitor0 Monitor 2 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Primary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY5\Monitor0 NVIDIA GeForce GTX 570 Manufacturer NVIDIA Model GeForce GTX 570 GPU GF110 Device ID 10DE-1086 Revision A2 Subvendor Gigabyte (1458) Series GeForce GTX 500 Current Performance Level Level 3 Current GPU Clock 845 MHz Current Memory Clock 1900 MHz Current Shader Clock 1690 MHz Voltage 0.988 V Technology 40 nm Die Size 520 mm² Release Date Dec 07, 2010 DirectX Support 11.0 OpenGL Support 5.0 Bus Interface PCI Express x16 Temperature 57 °C Driver version 9.18.13.2018 BIOS Version 70.10.55.00.01 ROPs 40 Shaders 512 unified Memory Type GDDR5 Memory 1280 MB Bus Width 64x5 (320 bit) Filtering Modes 16x Anisotropic Noise Level Moderate Max Power Draw 219 Watts Count of performance levels : 3 Level 1 - "Default" GPU Clock 50 MHz Memory Clock 135 MHz Shader Clock 101 MHz Level 2 - "2D Desktop" GPU Clock 405 MHz Memory Clock 324 MHz Shader Clock 810 MHz Level 3 - "3D Applications" GPU Clock 845 MHz Memory Clock 1900 MHz Shader Clock 1690 MHz Things I've tried: 1) Updating the graphics driver 2) Setting windows power mode to High Performance 3) Reset Nvidia Global Performance settings to default

    Read the article

  • Server 2008 Hard Faults

    - by claw
    Hey all, please bear with me as I haven't looked at a server in a very long time. The problem I am having is with a Windows 2008 Standard FE Service Pack 2 server: Intel Xeon X3430 @ 2.40 GHz (2.39 GHz), 4 GB memory, 64 bit. There seem to be no problems other than the physical memory peaking at 91%, always with over 100 hard faults per second. To my understanding, hard faults should be fairly rare on a machine with this much memory. Are there any logs I can show you? Or investigate myself? The general performance of the machine is OK; I can access SBS 2008 and change settings fairly smoothly without hangs etc. However, we connect to the server and do quite a bit of SQL via an application. To retrieve, say, 20 rows, it can take 20+ seconds. Thanks in advance, Jamie EDIT: What the server is used for: IIS, ASP Web Service, SQL 2008, Exchange. Unable to upload screenshots due to low reputation - why doesn't my SO work here :)

    Read the article

  • LibGDX Boid Seek Behaviour

    - by childonline
    I'm trying to make a swarm of boids which seek out the mouse position and move towards it, but I'm having a bit of a problem. The boids just seem to want to go to the upper-right corner of the game window. The mouse position seems to influence the behavior a bit, but not enough to make the boid turn towards it. I suspect there is a problem with the way LibGDX handles its coordinate system, but I'm not sure how to fix it. I've uploaded the eclipse project here! Also, here are the relevant bits of my code, in case you see something obviously wrong:

        public Agent(){
            _texture = GdxGame.TEX_AGENT;
            TextureRegion region = new TextureRegion(_texture, 0, 0, 32, 32);
            TextureRegion region2 = new TextureRegion(GdxGame.TEX_TARGET, 0, 0, 32, 32);
            _sprite = new Sprite(region);
            _sprite.setSize(.05f, .05f);
            _sprite_target = new Sprite(region2);
            _sprite_target.setSize(.1f, .1f);
            _max_velocity = 0.05f;
            _max_speed = 0.005f;
            _velocity = new Vector2(0, 0);
            _desired_velocity = new Vector2(0, 0);
            _steering = new Vector2(0, 0);
            _position = new Vector2(-_sprite.getWidth()/2, -_sprite.getHeight()/2);
            _mass = 10f;
        }

        public void Update(float deltaTime){
            _target = new Vector2(Gdx.input.getX(), Gdx.input.getY());
            _desired_velocity = ((_target.sub(_position)).nor()).scl(_max_velocity,_max_velocity);
            _steering = ((_desired_velocity.sub(_velocity)).limit(_max_speed)).div(_mass);
            _velocity = (_velocity.add(_steering)).limit(_max_speed);
            _position = _position.add(_velocity);
            _sprite.setPosition(_position.x, _position.y);
            _sprite_target.setPosition(Gdx.input.getX(), Gdx.input.getY());
        }

    I've used this tutorial here. Thanks!
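    One likely culprit, going only by the code above (this is a reading of the symptoms, not a confirmed fix): Gdx.input.getX()/getY() return window coordinates with the origin in the top-left corner and y growing downward, while the sprites here live in a small world-unit space with y growing upward, so the raw mouse position is both flipped and out of scale. Unprojecting the mouse through the camera that draws the scene puts the target into the same space as _position. A minimal sketch, assuming an OrthographicCamera field named camera (that name is hypothetical and must match whatever camera the game actually renders with):

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.math.Vector3;

        public class MouseTarget {
            private final OrthographicCamera camera;     // the camera the scene is rendered with
            private final Vector3 tmp = new Vector3();   // reused to avoid per-frame allocation

            public MouseTarget(OrthographicCamera camera) {
                this.camera = camera;
            }

            /** Mouse position in world coordinates, i.e. the same space as _position. */
            public Vector2 worldMouse() {
                tmp.set(Gdx.input.getX(), Gdx.input.getY(), 0);
                camera.unproject(tmp);                   // flips the y axis and undoes the camera transform
                return new Vector2(tmp.x, tmp.y);
            }
        }

    With that in place, Update() would compute _target from worldMouse() instead of the raw input, and _sprite_target.setPosition() would use the same unprojected point.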

    Read the article

  • Slow network file access with VMWare Server and Windows 7

    - by garethm
    I have a Windows 7 32-bit virtual machine running under VMware Server. The host is Windows 7 64-bit. When I copy files between them it is extremely slow - it will take several minutes to copy even a 1 MB file. By comparison, I can upload the file to a website and then download it again almost instantaneously. Browsing the network is quite zippy and has no problems. I have been unable to set up a HomeGroup between the computers though - the guest always times out without managing to get set up. Any ideas on how I should go about tracking down where the problem is?

    Read the article

  • SharePoint 2010 BDC Model Deployment Issue: “The default web application could not be determined.”

    - by Jan Tielens
    Yesterday I tried to deploy a Business Data Connectivity Model project created in Visual Studio 2010 to my SharePoint 2010 test server (all RTM versions), but during the deployment of the solution, SharePoint threw me the following error:

        Add Solution:  Adding solution 'BCSDemo2.wsp'...  Deploying solution 'BCSDemo2.wsp'... Error occurred in deployment step 'Add Solution': The default web application could not be determined. Set the SiteUrl property in feature BCSDemo2_Feature1 to the URL of the desired site and retry activation. Parameter name: properties

    A little bit of searching on the internet taught me that I was not the only one having this issue; in fact, Paul Andrew describes how to solve it in this post. Although Paul describes what to do, his explanation is not, let's say, very elaborate. :-) So let's describe the steps in a little bit more detail:

    Create a new Business Data Connectivity Model project in Visual Studio 2010 and (optionally) implement all your code, change the model, etc. When you try to deploy, you get the error mentioned above. To fix it, in the Solution Explorer navigate to and open the Feature1.Template.xml file (the name could be different if you decided to give your feature a different name, of course). Add the following XML to the Feature element that's already there (replace the Value with the URL of your site, of course):

        <Properties>
          <Property Key='SiteUrl' Value='http://spf.u2ucourse.com'/>
        </Properties>

    The resulting XML should look like:

        <?xml version="1.0" encoding="utf-8" ?>
        <Feature xmlns="http://schemas.microsoft.com/sharepoint/">
          <Properties>
            <Property Key='SiteUrl' Value='http://spf.u2ucourse.com'/>
          </Properties>
        </Feature>

    Deploy the solution, now without any issues. :-) What happens now is that when Visual Studio creates the SharePoint Solution (the WSP file), it will use the Feature template XML to generate the Feature manifest, which will now include the missing property.

    Read the article

  • Can I use Ubuntu One to sync data fiies between two remote computers

    - by Sleepy John
    I've got two computers, both running Ubuntu, with files in their home folders sync'd to Ubuntu One. I'd like to know if it's possible to make Ubuntu One automatically download data changes that have been uploaded automatically to Ubuntu One from one computer to the equivalent data file in the other. Clarifying a bit further: I've installed Red Notebook on both computers, and so they each have their own ~/.rednotebook/data folder containing a series of .txt files corresponding to the monthly entries in each of them. These are sync'd to upload any changes to those .txt files to Ubuntu One. My question is: can I, and if so how do I, make Ubuntu One automatically download and replace those .txt files on the other computer after they've been updated and uploaded from the first computer? I did laboriously manage to download all those text files which had been uploaded from the first computer, from Ubuntu One, one by one, to the second computer, but what I want to do is automate this process, and that's where I'm stuck. I'm aware that things could get a bit complicated if both my computers were online at the same time and both were simultaneously making different Red Notebook entries, so that's not the scenario I'm trying to cover. All I want to achieve is that whatever updates to the files have been uploaded by one computer will automatically be downloaded to the same-named files in the other computer as soon as that second computer appears online and detects that Ubuntu One has matching but more recent sync'd files than the ones it's holding.

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don’t Wait Much Longer

    - by Evelyn Neumayr
    By Kerrie Foy, Director PLM Product Marketing, Oracle Depending on severity, product compliance issues can cause various problems from run-away budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you’ve been putting off a systematic approach to product compliance, it is time to reconsider that decision. Why now?  No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access.  For example, Restriction of Hazardous Substances (RoHS), a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment, was originally adopted by the European Union in 2003 for implementation in 2006 and has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in Medical Devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. Additional regulations are expected from organizations such as the Food and Drug Administration in the US and similar organizations elsewhere. Meeting compliance requirements and also successfully investing in eco-friendly designs can be a major challenge. It may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes.  Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming.  However, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge. Companies are beginning to adapt and thrive by implementing systematic approaches to product compliance that are more than functional bandages, they are revenue-generating engines. Consider working with Oracle to help you address your compliance needs. Many of the world’s most innovative leaders and pioneers are leveraging Oracle’s Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster.   Particularly, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase.  Agile PG&C is a comprehensive solution that makes product compliance per corporate initiatives and regulations more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, gain rapid visibility into non-compliance issues, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. 
Implementing an enterprise-wide systematic approach to product compliance is a competitive investment. From the start, Agile PG&C enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation.  Don’t wait any longer! To find out more about Agile Product Governance & Compliance download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738.

    Read the article
