Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Hyper-V shutdown/start up host causing issues in VMs

    - by Colin Desmond
    I have a single physical host running Windows Server 2008 R2 with Hyper-V, all fully updated. On that host I am running 3 guests: a DC, a web server, and a SQL Server 2008 R2 SP1 box. All are running Windows Server 2008 R2, again all fully patched. Generally all is fine, but sometimes, and not repeatably, when I shut down the host properly (which suspends the guests) and everything comes up again, SQL Server is no longer running and the IIS app pool I am running the website in needs to be told which account to run under again. Any ideas?

    Read the article

  • BusEnum2 and a Minor Bug Fix

    - by Kate Moss' Open Space
    The default root bus driver, BusEnum, enumerates and activates drivers one by one in a synchronous manner. This not only slows boot time, but if any driver's init function (XXX_Init) hangs, the whole system won't boot at all. There is a sample enhanced root bus driver, BusEnum2, at http://msdn.microsoft.com/en-us/library/dd187254.aspx. The page provides the sample code and a detailed explanation of the design concept. With the multi-threaded BusEnum2 on a CE7 system with SMP enabled, the scalability is even more significant: since you have more than one processor, it can load drivers in parallel! Everything looks good so far, except that there is a small bug in the sample code. Fortunately, it is easy to fix, but hard to trace if you ever encounter it! The BUSENUM2 flag is only defined in BUSENUM2\BUSDEF\sources, not in BUSENUM2\BUSENUM\sources. The DeviceFolder is implemented in BUSENUM2\BUSDEF, but the instance is created in BUSENUM2\BUSENUM\busenum.cpp, so the result is that it allocates less memory than it actually needs. Add

        CDEFINES=$(CDEFINES) -DBUSENUM2

    into BUSENUM2\BUSENUM\sources and the problem is fixed!

    Read the article

  • What is the minimum delay between two consecutive RS232 frames?

    - by Lord Loh.
    I have been working on creating a UART on an FPGA. I can successfully transmit and receive single characters typed on PuTTY. However, when I set my FPGA to constantly write a large sequence of "A"s, sometimes I end up with a sequence of "@"s or some other characters until I reset the FPGA a few times. I believe the UART on the computer loses track of the difference between the start bit and a zero. The delay between two "A"s is ~30 us (measured with a logic analyzer) and the baud rate is 115200, 8N1. Is there a minimum delay that must be maintained between two consecutive RS232 frames?
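    For context, the frame timing involved, as a quick sketch (the 10 bits come from the 8N1 framing above: 1 start + 8 data + 1 stop; asynchronous RS232 requires no idle time beyond the stop bit itself):

        baud = 115200
        bits_per_frame = 10                      # 1 start + 8 data + 1 stop
        frame_us = bits_per_frame / baud * 1e6
        bit_us = 1 / baud * 1e6
        print(f"one frame: ~{frame_us:.1f} us, one bit: ~{bit_us:.1f} us")
        # ~86.8 us per frame, ~8.7 us per bit: a 30 us gap is already
        # more than three idle bit times between frames.

    So the gap itself is generous by the numbers; the symptom ("A", 0x41, turning into "@", 0x40, a one-bit difference) suggests a bit-level timing or framing error rather than an insufficient delay.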

    Read the article

  • What technical test should I give to a job candidate

    - by Romain Braun
    I'm not sure if this is the right Stack Exchange website, but: I have three candidates coming in tomorrow. One has 15 years of experience in PHP, and the two others have about 1 year of experience in PHP/frontend development. For the latter two I was thinking about a test where they would have to develop a web app allowing users to manage other users, as in: display a list of users, display a single user, modify a user, and add extended properties to a user. This way it would feature HTML, CSS, JS, Ajax, PHP and SQL. Do you think this would be a good test? What test should I give to the first one? He needs something much more difficult, I guess. I'm also listening if you have any advice or ideas about what makes a good developer and what I should pay attention to in the candidates' code. I was also considering thinking outside the box, something more algorithm-related, such as asking him to write the fastest function to tell whether a number is prime, because there are a lot of optimizations you can apply to such a function (see the sketch below). They have one day to do it.
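    As an illustration of the optimizations such a question can surface (a minimal sketch, in Python rather than PHP for brevity): skip even numbers and stop at the square root, since any factor above sqrt(n) pairs with one below it.

        import math

        def is_prime(n: int) -> bool:
            """Trial division with two cheap optimizations."""
            if n < 2:
                return False
            if n % 2 == 0:
                return n == 2              # 2 is the only even prime
            # only odd candidate divisors up to sqrt(n) need checking
            for d in range(3, math.isqrt(n) + 1, 2):
                if n % d == 0:
                    return False
            return True

    A candidate who also mentions sieving for repeated queries, or probabilistic tests for very large inputs, is showing exactly the kind of depth such a question is after.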

    Read the article

  • Trouble using Ray.Intersect method on bounding boxes in a 2D XNA game

    - by getsauce
    I am trying to use a ray and bounding box to determine if a box is between the player and the mouse pointer in 2D space. When I try testing the code, the collision will return true when pointed at the box, but it also returns true under other circumstances where it shouldn't. For instance, if I have a player on the left and a box directly to the right, I can put the mouse pointer a few hundred pixels above the box or a few hundred below and it will still return true. Also, I can put my mouse pointer to the left of the player and in a certain area it will still return true. Does anyone have any idea what might cause this? I have left out definitions for some of my members and properties just to make this code sample easier to read. The Position property is just a Vector2 for where each object is located.

        ray = new Ray(new Vector3(player.Position, 0), new Vector3(mouse.Position, 0));
        box = new BoundingBox(new Vector3(box.Position, 0),
            new Vector3(new Vector2(box.Position + box.Width, box.Position + box.Height), 0));

        if (ray.Intersects(box) != null)
            collision = true;
        else
            collision = false;
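    One thing worth checking (my reading of the snippet, not something confirmed in the post): XNA's Ray takes a position and a direction, so passing the raw mouse position as the second argument aims the ray somewhere unrelated to the player-to-mouse line. A language-neutral sketch of the intended test, in Python, using the classic slab method:

        def ray_hits_aabb(origin, target, box_min, box_max):
            """Ray from origin toward target vs. a 2D axis-aligned box."""
            direction = (target[0] - origin[0], target[1] - origin[1])
            tmin, tmax = 0.0, float("inf")
            for o, d, lo, hi in ((origin[0], direction[0], box_min[0], box_max[0]),
                                 (origin[1], direction[1], box_min[1], box_max[1])):
                if d == 0.0:
                    if o < lo or o > hi:          # parallel to slab and outside it
                        return False
                else:
                    t1, t2 = (lo - o) / d, (hi - o) / d
                    tmin = max(tmin, min(t1, t2))
                    tmax = min(tmax, max(t1, t2))
                    if tmin > tmax:
                        return False
            return True

    Note that a ray is infinite past the target; to require the box to lie strictly between player and mouse, cap tmax at 1.0 (with the unnormalized direction above, t = 1 is exactly the mouse position).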

    Read the article

  • capture nimbuzz traffic

    - by lurscher
    I need to capture all the traffic, especially during login, between the Nimbuzz PC client and the Nimbuzz server. The reason is that I need to debug the outgoing packets at login that mark the user's visibility status, in order to reproduce them in an in-house XMPP client application. I've tried doing this with Wireshark, but I seem to be pretty helpless with this tool. Also, the packets I've been able to see are all from before the SASL negotiation happens; after that, I cannot see the XML packets being exchanged. Any help on how to achieve this task is greatly appreciated (preferably on Windows, since there is no Nimbuzz client for Linux; in any case I can install one in a VM and monitor the traffic from the VM instance on the Linux host).

    Read the article

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 GB in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, it seems to be pretty random: at some point during the day the log backup will go from its average size of 15 MB all the way up to 40 GB. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which occur on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log file growths and pinpoint them to a particular application or client? Thanks in advance.

    Read the article

  • Staggering java linux process startup to prevent OOM

    - by ctennis
    I am running a number of Java processes on a single Linux machine. From a memory and computing standpoint, everything is fine when things are static. However, periodically we use a configuration management package to upgrade the jar or war files and restart the Java processes. The problem is that it restarts them all relatively quickly, so we get 10 or so Java VMs restarting at the same time (we use daemontools for the service stops/starts), which wreaks havoc on the machine in terms of OOMs or everything just being really slow, because it's spawning the JVM 10x at the same time. Other than trying to stagger the startups, is there a smarter way of handling this? Maybe sysctl performance tuning parameters, or a JVM parameter?
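    One possibility along the "smarter than fixed sleeps" line (a sketch, not from the original thread; the lock path and java arguments are placeholders): serialize only the expensive startup phase with a shared file lock, so restarts queue up instead of landing simultaneously.

        import fcntl
        import subprocess

        LOCK_PATH = "/var/lock/jvm-start.lock"   # hypothetical shared lock file

        def start_jvm(jvm_args):
            # At most one JVM is in its startup phase at a time: each wrapper
            # blocks here until the previous service has had time to come up.
            with open(LOCK_PATH, "w") as lock:
                fcntl.flock(lock, fcntl.LOCK_EX)
                proc = subprocess.Popen(["java"] + jvm_args)
                try:
                    # crude readiness wait; a real health check would be better
                    proc.wait(timeout=30)
                except subprocess.TimeoutExpired:
                    pass          # still running after 30 s: assume it is up
            return proc           # lock is released when the file closes

    Each daemontools run script would call such a wrapper, so the services still restart automatically but no longer all spawn their JVMs at once.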

    Read the article

  • How to create multiple OS on same DVD [duplicate]

    - by learner
    This question already has an answer here: How to make a multiboot CD that will start a user-chosen ISO file (7 answers) I searched this forum, but there are only general answers which don't give me the desired output. Here is what I want to do. I have (1) a Windows 7 ISO, (2) a Windows 8.1 Preview ISO and (3) an Ubuntu 12.10 ISO. From these I want to create a single bootable DVD, so that the DVD asks me to choose which of the 3 OSes to install. Is it possible? If so, please help me.

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                hash_id = hash((arg1, arg2))
                if use_cache and hash_id in self._registry:
                    return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                self._registry[hash_id] = obj
                return obj

        # module x.py
        class X:
            ...  # immutable, expensive-to-build instances

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id).

    Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage.

    What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last-modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry.

    Since developers may forget to update the validation identifier, or may not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
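    For the class-level validation identifier described above, a minimal sketch of how the pieces could fit together (the names CACHE_VERSION and CACHE_DIR are illustrative, not from the original post; note that hashlib gives keys that are stable across interpreter runs, unlike the built-in hash(), whose string hashing is randomized per process):

        import hashlib
        import os
        import pickle

        CACHE_DIR = "x_cache"              # hypothetical on-disk cache location

        class X:
            CACHE_VERSION = 3              # bump when a code change invalidates old caches

            def __init__(self, arg1, arg2):    # stand-in for the real, expensive class
                self.arg1, self.arg2 = arg1, arg2

        def cache_key(arg1, arg2):
            # Arguments plus the validation id: bumping CACHE_VERSION
            # orphans every previously pickled object.
            raw = repr((arg1, arg2, X.CACHE_VERSION))
            return hashlib.sha256(raw.encode()).hexdigest()

        def get_x(arg1, arg2):
            path = os.path.join(CACHE_DIR, cache_key(arg1, arg2) + ".pkl")
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)          # disk cache hit
            obj = X(arg1, arg2)
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(path, "wb") as f:
                pickle.dump(obj, f)                # populate cache for next run
            return obj

    An in-memory dict keyed by the same cache_key can sit in front of the disk lookup to get the combined two-level cache the post asks about.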

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have 2 approaches to solve this, but I would like to know which approach will put the least stress on my application and server:

    1. The updater sends the server a list of the files it wants, and the server zips them and simply returns this single compressed file.
    2. The updater sends a request for each separate file to the server, which simply returns the file.

    The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files, and at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
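    For the first approach, the server-side bundling is only a few lines; a sketch (framework-agnostic and purely illustrative -- it builds the archive in memory and leaves the HTTP endpoint and transport out):

        import io
        import zipfile

        def bundle_files(paths):
            """Zip the requested update files into one in-memory archive."""
            buf = io.BytesIO()
            with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
                for path in paths:
                    # arcname keeps the client-side layout independent of server paths
                    zf.write(path, arcname=path)
            return buf.getvalue()      # bytes, ready to send as a single response

        # e.g. body = bundle_files(["lib/app.jar", "conf/update.xml"])

    With 10-50 files of ~100 KB each, the archive stays small enough that one request per update, rather than one per file, is likely the gentler option for both sides.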

    Read the article

  • Cursor seems to freeze in the first attempt of typing - Unity 3D, 12.04

    - by Denis
    It happens on the first attempt at typing, no matter whether it's right after startup, 5 minutes later, or later still. The cursor (or maybe it's the system) seems to freeze, no matter which application I use, taking up to 5 seconds to show what was typed. Subsequently, everything is normal, including in other applications. @Anwar Shah suggested it could be a daemon waiting to run before the launching of the first application. Turning off Zeitgeist didn't help. It occurs only with Unity 3D; tested with Unity 2D, everything is fine. I tried to change some Compiz settings and nothing worked, although I have not tested every single parameter. I also deactivated the ATI proprietary driver, with no effect. My system: AMD E350 1.6 GHz, 2 GB RAM, ATI graphics - Ubuntu 12.04, 64 bits. Update 1: the cursor is blinking normally before I start typing. After the first character (which is not shown), it seems to freeze, taking 5 seconds to get normal again. Very annoying, especially when you want to access login sites. Update 2: I tested on a different and older machine (Athlon 64 4800 X2, 4 GB RAM): no such problem - it takes 2 seconds, which is acceptable. I think it could be related to my specific hardware (Samsung RV415), but I'm not sure about it. Anyone experiencing something similar? Is this what I should expect, or can it be fixed or improved? Thanks.

    Read the article

  • Why Swipe left doesn't work? [on hold]

    - by Hitesh
    I wrote the code below to detect and perform a sprite action on the single tap and swipe right events.

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            float x = 0F;
            int tapCount = 0;
            boolean playermoving = false;
            // TODO Auto-generated method stub
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_MOVE) {
                if (pSceneTouchEvent.getX() > x) {
                    playermoving = true;
                    players.runRight();
                }
                if (pSceneTouchEvent.getX() < x) {
                    Log.i("Run Left", "SPRITE Left");
                }
                /*
                 * if (pSceneTouchEvent.getX() < x) { System.exit(0);
                 * Log.i("SWIPE left", "SPRITE LEFT"); }
                 */
            }
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_DOWN) {
                playermoving = false;
                x = pSceneTouchEvent.getX();
                tapCount++;
                Log.i("X CORD", String.valueOf(x));
            }
            if (pSceneTouchEvent.isActionDown()) {
                if (tapCount == 1 && playermoving != true) {
                    tapCount = 0;
                    players.jumpRight();
                }
            }
            return true;
        }

    The code works fine except that the swipe left event is not being detected, for some reason. What can I do to make the swipe left action work? Please help.

    Read the article

  • Is it possible to add hardware on a remote machine and use it as if it was installed on the local machine?

    - by that0th3rGuy
    Probably a silly question. We have two computers in the apartment running Windows 7 and Windows Vista. 99% of the time we use headphones but every now and then we have a desire for some ambiance so we got a set of 2.1 speakers. But now only one of us has access to the speakers unless we move them around every now and then. So I was wondering if it's possible to add the other computer's sound card as a hardware device on my computer so that I can configure, for instance, Winamp to play through the other computer's sound card, hence the speakers connected to the other computer.

    Read the article

  • How to calculate bandwidth limits per user on WiFi network

    - by Lars
    A typical 802.11g access point can provide around 25 Mbps of bandwidth. How is that bandwidth shared among the users? Furthermore, how many users can be served by a single access point using 802.11g in an environment with low interference and average web activity from the users? The goal is to use bandwidth limiting to avoid starvation for some users when others start to download a file, stream HD video or do some other bandwidth-intensive activity. Can someone break down the math on this?
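    A rough back-of-the-envelope version of that math, as a sketch (the per-user figures below are assumptions, not measurements, and 802.11 airtime sharing is not perfectly fair, so treat the results as upper bounds):

        capacity_kbps = 25_000      # usable 802.11g throughput from the question
        avg_web_kbps = 150          # assumed average per-user demand while browsing
        per_user_cap_kbps = 1_000   # assumed per-user limit to prevent starvation

        by_average = capacity_kbps / avg_web_kbps       # ~166 users by mean demand
        all_maxed = capacity_kbps / per_user_cap_kbps   # 25 users all at their cap
        print(f"~{by_average:.0f} users by average load, "
              f"{all_maxed:.0f} if everyone saturates the cap")

    The gap between the two numbers is exactly why a cap helps: browsing users are idle most of the time, and the limit stops a single downloader from claiming the idle airtime permanently.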

    Read the article

  • Make sniqt recognize all tray abilities (or create a working indicator in Qt)

    - by hakermania
    There is this old thread of mine: How do I create a working indicator with Qt/C++?, where I was advised to use the QSystemTrayIcon API for making a tray icon in Ubuntu for my application. Sniqt is a program that takes care of the rest. As is well known, Ubuntu has got rid of tray icons; instead, it now uses indicators and only indicators. Sniqt converts the Qt tray icons into working indicators. The problem is that it doesn't do a very decent conversion: actions like single click, middle click etc. do not work, while they do on systems that support tray icons. Is there a way to get these actions back? Can I use QSystemTrayIcon and still have these interesting (and, in my case, very helpful) actions in Ubuntu? I would also be glad to get an answer to the other thread I mentioned earlier (how to make a working indicator using the GTK libraries and prevent the crash). Link for the Sniqt bug: https://bugs.launchpad.net/sni-qt/+bug/1027652

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data - by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise, like social profiles - will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications and Financial Services that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of a heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success with Big Data - in other words, ROI from Big Data investments - businesses need to acquire, organize and analyze the deluge of data to make better decisions. Structured and unstructured data will need to coexist, with a tight link between the two, to extract maximum insight. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • Exchange 2010 install locks out high level accounts

    - by tearman
    Basically, we assume this problem started when we installed Exchange 2010 alongside our Exchange 2003 server. The Exchange 2010 server is not active, just running on the domain. What's actually going on is that user groups like Enterprise Admins are getting a single Deny flag on Full Control over mailboxes currently residing on the Exchange 2003 server, which is preventing any of us from making changes. It says these permissions are inherited from the parent object, but we have no idea which one that is. Any idea how to go about fixing this?

    Read the article

  • Connecting a USB laptop to a RJ45 serial port

    - by Jon
    We are about to get our first managed switch at work (Procurve 2520G-24-PoE), and this lowly programmer gets to put on his admin hat and try to configure it. The switch has an RJ45 serial port for console access. My laptop has USB ports but no serial port. In fact, there isn't a single computer in the office with a serial port. I've seen USB-to-DB9 adapters, but I need to go from USB to RJ45 (serial). How would I go about accomplishing this? Do I need two adapters? Will USB-to-DB9 and then DB9-to-RJ45 work? Thanks in advance.

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using either xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in descending order.

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    You can look at the first rows in order to understand which are the most expensive columns in your tabular model. The interesting data provided are:

    - TABLE_ID: the name of the object - it can also be a dictionary or an index
    - COLUMN_ID: the column the object belongs to - you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.

    Read the article

  • What are working xorg.conf settings for using a Matrox TripleHead2Go @ 5040x1050?

    - by Brendan Abel
    I'm trying to configure xorg.conf to correctly set the resolution of my screens. I'm using a Matrox TripleHead, so the monitor is a single 5040x1050 screen. Unfortunately, it's being incorrectly set to 3840x1024. Here is my xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 260.19.06 (buildd@yellow) Mon Oct 4 15:59:51 UTC 2010

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Matrox"
            HorizSync      31.5 - 80.0
            VertRefresh    57.0 - 75.0
            #Option        "DPMS"
            Modeline       "5040x1050@60" 451.27 5040 5072 6784 6816 1050 1071 1081 1103
            #Modeline      "5040x1050@59" 441.28 5040 5072 6744 6776 1050 1071 1081 1103
            #Modeline      "5040x1050@57" 421.62 5040 5072 6672 6704 1050 1071 1081 1103
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 9800 GTX+"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth   24
            Option         "TwinView" "0"
            Option         "metamodes" "5040x1050"
            SubSection     "Display"
                Depth      24
            EndSubSection
        EndSection

    Read the article

  • Why does gvim open session with extra/duplicate tabs?

    - by drapkin11
    I'm running gvim, have 2 files open in 2 tabs. I save the current session via the sessionman plugin by Yuri Klubakov. I close gvim (or keep gvim open but close the session, doesn't matter). When I reopen gvim and load the session, I have 3 tabs opened - two of the tabs have the same file! This is not just limited to this single session. When I open some of my other sessions, gvim opens about twice the number of tabs that I expect it to. I disabled this plugin and tried another (session by Peter Odding), but I still get the same problem. Any idea what might be going on behind the scenes?

    Read the article

  • WMII Terminal Width of 80 Columns for xterm (colrules)

    - by BCable
    I'm trying to get WMII to split horizontally at 80 columns for xterm, but I'm only seeing a way to do this via percentages. It would be nice to be able to set it by something other than a percentage for various resolutions, but if I have to deal with that I will. The problem is that even percentages don't work at my resolution (1366x768): 47+47 in /colrules yields 79 characters and 48+48 yields 81 characters. As far as I can tell, no decimals are allowed, so I can't do 47.5 for instance. I came from Ion3 and I'm used to using 80-column terminals, resizable by the keyboard, to get a reasonable cut-off point for Vim when I'm coding. I would just settle for using the mouse, but WMII seems to be much more fluid than Ion3, so I would have to do it a LOT, which sounds annoying. Any ideas?
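    For reference, the arithmetic behind those numbers, as a sketch (the character-cell and border widths are guesses, not measured values -- only the step size matters):

        screen_px = 1366
        char_px = 8        # assumed xterm font cell width in pixels
        border_px = 4      # assumed window decoration overhead

        for pct in (46, 47, 48):
            cols = (screen_px * pct // 100 - border_px) // char_px
            print(f"{pct}% -> ~{cols} columns")
        # Each percent of a 1366 px screen is ~13.7 px, i.e. ~1.7 columns,
        # so with integer percentages the count jumps right past 80.

    This is why integer /colrules percentages can never land exactly on 80 columns at this resolution.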

    Read the article

  • a load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (one time directly testing one of the web servers, and the other time testing through the load balancer) using the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm going wrong somewhere?

    Read the article

  • How to rename everything matching a certain string in a folder

    - by lostiniceland
    Hello everyone. I am running Linux and I have some basic console knowledge, but my current problem is quite difficult and I don't know how to achieve this. I want/need to rename everything within a folder that matches a given string. By everything I mean:

    - folder and file names
    - content within files
    - content in hidden files

    Basically I want to refactor a Java project. Sure, I could use Eclipse to handle the replacing, but that leaves out the folders and resources outside of my workspace. I was thinking of a script that could do the job for me, but this seems rather tricky. For instance, when it comes to folder/file renaming, I want to replace only the part of the name that matches my string; the rest should remain untouched. Maybe someone already has something like this in his/her script collection :-) Thanks in advance. Marc
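    Not from the original thread, but one way to script it, as a Python sketch (assumes plain-text files, replaces the literal string OLD with NEW in names and contents, and walks bottom-up so directories can be renamed safely; try it on a copy of the project first):

        import os

        OLD, NEW = "OldName", "NewName"    # hypothetical identifiers to refactor
        ROOT = "."

        for dirpath, dirnames, filenames in os.walk(ROOT, topdown=False):
            for name in filenames:             # os.walk includes hidden files
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
                if OLD in text:                # rewrite matching file contents
                    with open(path, "w", encoding="utf-8") as f:
                        f.write(text.replace(OLD, NEW))
                if OLD in name:                # rename the file itself
                    os.rename(path, os.path.join(dirpath, name.replace(OLD, NEW)))
            base = os.path.basename(dirpath)
            if OLD in base:                    # children are done; rename the folder
                os.rename(dirpath, os.path.join(os.path.dirname(dirpath),
                                                base.replace(OLD, NEW)))

    Since only the matching substring is replaced, the rest of each name stays untouched, which is exactly the behavior asked for above.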

    Read the article
