Search Results

Search found 28957 results on 1159 pages for 'single instance'.


  • How much more memcache memory do I need to get a 95% hit ratio? [on hold]

    - by OneSolitaryNoob
    I have a memcache instance running with a 90% hit ratio. How can I estimate how much more memory it needs to reach a 95% hit ratio?
    Edit: This question was blocked, but I do not think it is impossible to answer. After all, anyone who has used a caching system has answered it, most likely with trial & error & luck. I can look at my usage patterns, and I can increase or decrease memory and see how the hit rate changes. Both of these provide data that informs an estimate. But what's a good/better/best way to do this?
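    One rough way to turn that trial-and-error data into an estimate: measure the hit rate at two different memory sizes, assume the miss rate follows a power law in cache size (often a fair approximation for LRU-style caches under skewed access patterns, but an assumption you should validate against your own measurements), and extrapolate. A minimal C# sketch with purely illustrative numbers, not a memcached feature:

      using System;

      class HitRateEstimate
      {
          static void Main()
          {
              // Two (sizeMB, hitRate) observations -- illustrative values, replace with your own.
              double s1 = 1024, h1 = 0.86;
              double s2 = 2048, h2 = 0.90;

              // Model assumption: miss(s) = a * s^(-b); fit b and a from the two samples.
              double b = Math.Log((1 - h1) / (1 - h2)) / Math.Log(s2 / s1);
              double a = (1 - h1) * Math.Pow(s1, b);

              // Solve a * s^(-b) = 0.05 for s (the size giving a 5% miss / 95% hit rate).
              double target = Math.Pow(a / 0.05, 1.0 / b);
              Console.WriteLine($"Estimated size for a 95% hit rate: ~{target:F0} MB");
          }
      }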

    Read the article

  • Cocos2d sprite's parent not reflecting true scale value

    - by Paul Renton
    I am encountering issues with determining a CCSprite's parent node's scale value. In my game I have a class that extends CCLayer and scales itself based on game triggers. Certain child sprites of this CCLayer have mathematical calculations that become inaccurate once I scale the parent CCLayer. For instance, I have a tank sprite that needs to determine its firing point within the parent node. Whenever I scale the layer and ask the layer for its scale values, they are accurate. However, when I poll the sprites contained within the layer for their parent's scale values, they always come back as 1.0.

      // From within the sprite -- outputs 1.0, 1.0
      CCLOG(@"ChildSprite-> Parent's scale values are scaleX: %f, scaleY: %f", self.parent.scaleX, self.parent.scaleY);

      // From within the layer -- outputs 0.80, 0.80
      CCLOG(@"Layer-> ScaleX : %f, ScaleY: %f , SCALE: %f", self.scaleX, self.scaleY, self.scale);

    Could anyone explain to me why this is the case? I don't understand why these values are different. Maybe I don't fully understand the inner design of Cocos2d. Any help is appreciated.

    Read the article

  • General questions regarding open-source licensing

    - by ndg
    I'm looking to release an open-source iOS software project, but I'm very new to the licensing side of things. While I'm aware that the majority of answers here will not come from lawyers, I'd appreciate it if anyone could steer me in the right direction. With the exception of the following requirements, I'm happy for developers to largely do whatever they want with the project's source code. I'm not interested in any copyleft licensing schemes, and while I'd like to encourage attribution in derivative works, it is not required. As such, my requirements are as follows:

    1. The original source can be distributed and re-distributed (verbatim), both commercially and non-commercially, as long as the original copyright information, website link and license are maintained.
    2. I wish to retain rights to any of the multimedia distributed as part of the project (sound effects, graphics, logo marks, etc.). Such assets will be included to allow other developers to easily run the project, but cannot be re-distributed in any manner.
    3. I wish to retain rights to the application's name and branding.

    Further to selecting an applicable license, I have the following questions:

    1. The project makes use of a number of third-party libraries (all licensed under variants of the MIT license). I've included the individual licenses within the source (and application) and believe I've met all requirements expressed in these licenses, but is there anything else that needs to be done before distributing them as part of my open-source project?
    2. Also included in my project is a single proprietary, closed-source library that's used to power a small part of the application. I'm obviously unable to include this in the source release, but what's the best way of handling this? Should I simply weak-link the library and exclude it entirely from the Git project?

    Read the article

  • Snow Leopard: Optimization

    - by Shyam
    Hi, I have a bunch of questions:

    1. I have a Mac network with five Macs. Right now they are individually getting software updates. Is there a way to download the patches/security updates to a single place (repository) and point all machines to that location?
    2. Personally, I have tools like Monolingual and Onyx, but are there tools you could recommend that positively affect the performance of the operating system? Tweaks would be nice. Links and pointers would be really appreciated.
    3. I've read about Time Machine; is there a way to back up all machines to a network drive using this tool?

    Thanks!

    Read the article

  • mdadm starts resync on every boot

    - by Anteru
    For the past few days (and I'm positive it started shortly before I updated my server from 13.04 to 13.10) my mdadm array has been resyncing on every boot. In the syslog, I get the following output:

      [ 0.809256] md: linear personality registered for level -1
      [ 0.811412] md: multipath personality registered for level -4
      [ 0.813153] md: raid0 personality registered for level 0
      [ 0.815201] md: raid1 personality registered for level 1
      [ 1.101517] md: raid6 personality registered for level 6
      [ 1.101520] md: raid5 personality registered for level 5
      [ 1.101522] md: raid4 personality registered for level 4
      [ 1.106825] md: raid10 personality registered for level 10
      [ 1.935882] md: bind<sdc1>
      [ 1.943367] md: bind<sdb1>
      [ 1.945199] md/raid1:md0: not clean -- starting background reconstruction
      [ 1.945204] md/raid1:md0: active with 2 out of 2 mirrors
      [ 1.945225] md0: detected capacity change from 0 to 2000396680192
      [ 1.945351] md: resync of RAID array md0
      [ 1.945357] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
      [ 1.945359] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
      [ 1.945362] md: using 128k window, over a total of 1953512383k.
      [ 2.220468] md0: unknown partition table

    I'm not sure what's up with that detected capacity change; looking at some old logs, it has appeared earlier as well without a resync right afterwards. In fact, I let the resync run to completion yesterday and rebooted, and it didn't resync then, but today it is resyncing again. For instance, yesterday I got:

      [ 1.872123] md: bind<sdc1>
      [ 1.950946] md: bind<sdb1>
      [ 1.952782] md/raid1:md0: active with 2 out of 2 mirrors
      [ 1.952807] md0: detected capacity change from 0 to 2000396680192
      [ 1.954598] md0: unknown partition table

    So it seems to be a problem that the RAID array does not get marked as clean after every shutdown? How can I troubleshoot this? The disks themselves are both fine; SMART reports no errors, everything is OK.

    Read the article

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect whether you enter a special region. Let's assume there is a map so big that simply checking every region on every move would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think that larger games do the same, because even if you are only interested in a few coordinates, you have to loop through the trigger zones to see whether the player is inside one of them - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I could imagine is to split the world up into multiple parts, register the event not on the movement itself but on all the parts that are covered by your area, and only check the areas registered in the current part.

    And another thing I would like to know: how could you detect that someone must have entered a trigger even though you never saw him directly inside it, because his client only sent you a move packet shortly before entering and another shortly after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to perform it on every move.
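    A minimal sketch of that grid idea (the types are illustrative, not taken from any particular engine): triggers register themselves in every cell they overlap, and on each move the server only tests the triggers listed in the player's current cell rather than every trigger on the map. The same structure also helps with the second question: instead of testing a single point, test the segment from the previous to the current position against the triggers in the cells that segment crosses.

      using System;
      using System.Collections.Generic;

      struct Rect
      {
          public float X, Y, W, H;
          public Rect(float x, float y, float w, float h) { X = x; Y = y; W = w; H = h; }
          public bool Contains(float px, float py) =>
              px >= X && px <= X + W && py >= Y && py <= Y + H;
      }

      class TriggerGrid
      {
          private readonly float cellSize;
          private readonly Dictionary<(int, int), List<Rect>> cells = new Dictionary<(int, int), List<Rect>>();

          public TriggerGrid(float cellSize) { this.cellSize = cellSize; }

          // Register a trigger in every cell its bounding box overlaps.
          public void Add(Rect trigger)
          {
              int x0 = (int)Math.Floor(trigger.X / cellSize), x1 = (int)Math.Floor((trigger.X + trigger.W) / cellSize);
              int y0 = (int)Math.Floor(trigger.Y / cellSize), y1 = (int)Math.Floor((trigger.Y + trigger.H) / cellSize);
              for (int cx = x0; cx <= x1; cx++)
                  for (int cy = y0; cy <= y1; cy++)
                  {
                      if (!cells.TryGetValue((cx, cy), out var list))
                          cells[(cx, cy)] = list = new List<Rect>();
                      list.Add(trigger);
                  }
          }

          // Called per movement update: only triggers registered in the player's cell are checked.
          public IEnumerable<Rect> TriggersAt(float px, float py)
          {
              var key = ((int)Math.Floor(px / cellSize), (int)Math.Floor(py / cellSize));
              if (cells.TryGetValue(key, out var list))
                  foreach (var t in list)
                      if (t.Contains(px, py))
                          yield return t;
          }
      }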

    Read the article

  • How do I setup a systemd service to be started by a non root user as a user daemon?

    - by Hans
    I just finished installing and setting up systemd on my Arch Linux system (2012.09.07) and uninstalled initscripts (and removed the configuration files). What I want to do is create a service that can be started and stopped by a non-root user. The service starts a detached screen session running rtorrent. However, I want every user on the system who has enabled this service to get their own instance started for them specifically. How would one go about doing this? I remember reading that systemd supports user instances of services, but I have been unable to find any information on how to set this up, or whether it relates to what I am looking for. This is the service file I have used for the system:

      [Unit]
      Description=rTorrent

      [Service]
      Type=forking
      ExecStart=/usr/bin/screen -d -m -S rtorrent /usr/bin/rtorrent
      ExecStop=/usr/bin/killall -w -s 2 /usr/bin/rtorrent
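    One direction worth trying (a sketch only, assuming your systemd version has working user-instance support; the unit name is illustrative, the ExecStart/ExecStop lines are reused from the unit above): place a per-user copy of the unit in ~/.config/systemd/user/ and manage it with systemctl --user, so each user who enables it gets an rtorrent screen session running under their own account.

      # ~/.config/systemd/user/rtorrent.service
      [Unit]
      Description=rTorrent (per-user instance)

      [Service]
      Type=forking
      ExecStart=/usr/bin/screen -d -m -S rtorrent /usr/bin/rtorrent
      ExecStop=/usr/bin/killall -w -s 2 /usr/bin/rtorrent

      # Each user enables and starts their own instance:
      #   systemctl --user enable rtorrent.service
      #   systemctl --user start rtorrent.service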

    Read the article

  • 7zip: how to extract to std output?

    - by Jason S
    I have 7z 4.65 and am trying to extract a single file to standard output. The 7z command-line help says -so is the command-line parameter to extract to standard output, but when I try this:

      >>> 7z e -so dist\dlogpkg.jar META-INF/MANIFEST.MF

      7-Zip 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03

      Error:
      I won't write data and program's messages to same terminal

    How can I fix this? There doesn't seem to be a command-line param to suppress the normal 7z stdout messages. (Edit: the equivalent operation in "unzip" would be

      unzip -p dist\dlogpkg.jar META-INF/MANIFEST.MF

    which works fine, but I'd like to use 7z for various reasons.)

    Read the article

  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem, but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine whether a clicked point intersects a (let's say rectangular) bounds in the game.

    I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get a point that seems to be in some completely different coordinate system. So then, as a sanity check, I tried taking an arbitrary world point and transforming it by the camera's View * Projection matrices. Surely this should give me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved.

    So, to form this into a question: how would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
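    The "completely different coordinate system" symptom is typically because View * Projection maps into clip space, whose x and y run from -1 to 1, not into pixel coordinates; the viewport transform is the missing third step. In XNA the Viewport class can do the whole round trip. A minimal sketch, assuming the camera exposes its view and projection matrices:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      static class CameraMath
      {
          // Screen (pixel) point -> world point. With an orthographic camera the z value
          // mostly just picks a depth; 0 means "on the near plane".
          public static Vector3 WorldFromScreen(Viewport viewport, Vector2 screen,
                                                Matrix view, Matrix projection)
          {
              var source = new Vector3(screen.X, screen.Y, 0f);
              return viewport.Unproject(source, projection, view, Matrix.Identity);
          }

          // World point -> screen (pixel) point, e.g. for the sanity check described above.
          public static Vector3 ScreenFromWorld(Viewport viewport, Vector3 world,
                                                Matrix view, Matrix projection)
          {
              return viewport.Project(world, projection, view, Matrix.Identity);
          }
      }

    The resulting world X/Y can then be tested against the rectangle's bounds directly.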

    Read the article

  • Recurring events repeatedly save a draft every minute

    - by Henrik Rasmussen
    Using Outlook 2010, some of my recurring (planned, not draft) events are saving a draft to my Drafts folder every single minute for as long as they are active. An example taken from real life: I have a calendar entry (appointment) occurring every day from 24-09-2012 until 28-09-2012, from 08:00 to 16:00 (GMT+1), with a blue category, only one participant (me), with a subject but without a place. So every minute from 24-09-2012 until 28-09-2012, from 08:00 to 16:00 (but not from 16:00 to 08:00), a new draft is automatically saved in my Drafts folder. How do I get rid of this behaviour?

    Addition: removing the offending event just allows a new one to take its place. There doesn't seem to be much on the sites - Microsoft calls it a "personal" issue, but there are more and more instances.

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not inherent to scrum, but rather comes from a bad implementation of scrum. So here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he/she shouldn't pass.

    1. Should the PO interfere in the UI design when there are designers at work in the scrum team? (An example that has happened to us is replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, or stuff like that.) If so, to what extent? Colors? Layout?
    2. Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.?
    3. Should the PO interfere in validation mechanisms? For example: this field should be required, or we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI.
    4. How granular could/should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online." However, the scrum team could implement this user story as a wizard of five steps, or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation?

    I'm asking these questions to judge whether our implementation is right or wrong.

    Read the article

  • Anyone love/hate the PowerConnect line of switches from Dell?

    - by Rob Bergin
    I am looking at replacing some unmanaged 16-port store-bought GB switches and wanted to go with Cisco, but that may be cost-prohibitive. Instead I am looking at the ProCurve or Dell PowerConnect line-up. I am looking for SNMP, management, and VLANs, and sFlow would be icing on the switch cupcake. I would get the 6224 or the 6248 and then maybe add the RPS-600 to it for redundant power; I think the RPS-600 supports multiple switches. Rack space is also a bit of a challenge, so I am trying to do it with as few rack units as possible. Ideally I would go with two 6224s or a single 6248 and then do two VLANs. Thanks for any feedback. Rob

    Read the article

  • How can I connect to my ACT database to export data?

    - by Adam Gessel
    I am trying to export data from the MSSQL server that ACT uses. It is ACT 2005. I have tried tons of different things: starting the MSSQL server in single-user mode (I still can't log in), copying the .mdf files from it and putting them on another server (it complains about having the same name as another database for master.mdf and almost every other file), and putting Administrator in the group that the MSSQL instance runs under - nothing seems to work! Can anybody with experience with this help me out? Thanks!

    Read the article

  • Does command/query separation apply to a method that creates an object and returns its ID?

    - by Gilles
    Let's pretend we have a service that calls a business process. This process calls the data layer to create an object of type A in the database. Afterwards we need to call another class of the data layer to create an instance of type B in the database, and we need to pass some information about A for a foreign key. In the first method we create an object (modify state) and return its ID (query) in a single method. In the second method we have two calls: one (createA) for the save and the other (getID) for the query.

      public void FirstMethod(Info info)
      {
          var id = firstRepository.createA(info);
          secondRepository.createB(id);
      }

      public void SecondMethod(Info info)
      {
          firstRepository.createA(info);
          var key = firstRepository.getID(info);
          secondRepository.createB(key);
      }

    From my understanding the second method follows command/query separation more fully. But I find it wasteful and counter-intuitive to query the database to get the object we have just created. How do you reconcile CQS with such a scenario? Does only the second method follow CQS, and if so, is it preferable to use it in this case?
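    For comparison, a third option often used to keep createA a pure command is to let the caller decide the identity up front (for example with a GUID), so nothing needs to be returned or queried back. A minimal sketch; the changed createA signature is illustrative, not from the original code, and it assumes the schema accepts client-generated keys:

      public void ThirdMethod(Info info)
      {
          var id = Guid.NewGuid();            // identity chosen by the caller, not by the database
          firstRepository.createA(id, info);  // command: returns nothing
          secondRepository.createB(id);       // command: uses the already-known key
      }

    This keeps both repository calls as commands, at the cost of moving key generation out of the database.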

    Read the article

  • Remote login/access on windows

    - by acidzombie24
    Hi, I was wondering what software I can use to access my own and other machines remotely. I have used SSH, which is nice, but I don't know what it would be like on Windows. (I assume it's the same idea, but with the Windows console instead of a bash terminal?) Windows has a lot of applications that require the GUI/mouse clicks. Actually, I don't know a single SSH or VPN command-line installer - not that I'm complaining, but it would be helpful if you could mention some. I haven't used a VPN; is that taking control of a user's screen/session? Or is it another instance/session, as if you logged in as a different user on that box? What solutions are at my disposal for Windows 7?

    Read the article

  • Collision Resolution

    - by ultifinitus
    Hey all, I'm making a simple side-scrolling game, and I would appreciate some input! My collision detection system is simple bounding-box detection, so it's really easy to implement. However, my collision resolution is ridiculous! Currently I have a little formula like this:

      if (colliding(firstObject, secondObject))
          firstObject.resolve_collision(yAxisOffset);
      if (colliding(firstObject, secondObject))
          firstObject.resolve_collision(xAxisOffset);

    where yAxisOffset is only set if the first object's previous y position was outside the second object's collision frame, and likewise for xAxisOffset. Now this is working great in general. However, there is a single problem: when I have a stack of objects and I push the first object against that stack, the first object gets "stuck" on the stack. What I think is happening is that the collision system checks and resolves collisions based on creation time, so if I check one axis, then the other, the object will "sink" directly along the axis being checked. This sinking action causes the collision detection routine to think there's a gap between our position and the other object's position, and when I finally check the object that I've already sunk into, my object's position is resolved back to its original position.

    All this is great, and I'm sure if I bang my head against a wall long enough I'll come up with a working algorithm, but I'd rather not =). So what in the heck do you think I should do? How could I change my collision resolution system to fix this?

    Here's the program (temporary link, not sure how long it'll last) (notes: arrow keys to navigate, click to drop block, x to jump). I'd appreciate any help you can offer!
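    One common way around the "resolve one axis, then the other" ordering problem is to resolve each contact along the axis of least penetration instead of in a fixed order. A minimal sketch (the Box type and screen-style coordinates with y growing downwards are assumptions, not taken from the game above):

      using System;

      class Box
      {
          public float X, Y, Width, Height;      // top-left corner plus size
          public float Left    => X;
          public float Right   => X + Width;
          public float Top     => Y;
          public float Bottom  => Y + Height;
          public float CenterX => X + Width / 2;
          public float CenterY => Y + Height / 2;
      }

      static class Collisions
      {
          // Push 'moving' out of 'solid' along whichever axis overlaps the least.
          public static void Resolve(Box moving, Box solid)
          {
              float overlapX = Math.Min(moving.Right, solid.Right) - Math.Max(moving.Left, solid.Left);
              float overlapY = Math.Min(moving.Bottom, solid.Bottom) - Math.Max(moving.Top, solid.Top);
              if (overlapX <= 0 || overlapY <= 0)
                  return; // not actually intersecting

              if (overlapX < overlapY)
                  moving.X += (moving.CenterX < solid.CenterX) ? -overlapX : overlapX;
              else
                  moving.Y += (moving.CenterY < solid.CenterY) ? -overlapY : overlapY;
          }
      }

    Because the correction is always the smallest one that separates the pair, pushing sideways into a stack no longer makes the object sink along one axis first and then snap back.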

    Read the article

  • Another installation is in progress

    - by Steven
    Whenever I try to install any program, I see an "Another installation is in progress. You must complete that installation before continuing this one." error. I googled the web and found that the solution would be to delete the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\InProgress registry key and reboot. Unfortunately, that didn't help me. When I open the "Services" MMC snap-in, it shows that the "Windows Installer" service is "Started", but the Start/Stop/Pause/Restart buttons are grayed out (the interesting thing is that its startup type = "Manual"). I also don't know how to explain that I already have two instances of msiexec.exe in memory, one of which is consuming 50 MB of memory. It looks like there's a serious issue with my installer service - is there any way to fix it? (Please do mind - I can't install anything!) Any help would be greatly appreciated.

    Read the article

  • Is it possible to hide folders/subfolders from users based on permissions?

    - by Uwe Keim
    On a Windows Server 2008 R2 machine that has a share with lots of nested folders, I want to be able to show only certain folders to certain AD users/AD user groups. Is it possible to configure the permissions on individual folders so that clients connecting to the share on the Windows 2008 R2 server from Windows XP/Windows 7 only see those folders for which they have "view" permission? Other clients should not see the folders at all in Windows Explorer. I was told that this seems to be a standard feature on Novell networks.

    Read the article

  • Cannot connect to a Cassandra DB from localhost

    - by DJYod
    Hello, I don't know if I'm on the right site. I installed a single Cassandra node on OpenSolaris; I don't have any other nodes. On the same server, I installed Ruby 1.8 with the cassandra gem. If I try to connect from my computer to the Cassandra node through the ruby cassandra gem, I can connect perfectly; if I try to do the same from the ruby cassandra gem on the server itself, it says that there is nothing listening on 127.0.0.1. Yet I can connect locally to the instance using telnet 127.0.0.1 9160 and it works... any idea? Thank you!

    Read the article

  • Import LDIF file to external server

    - by colemanm
    As a follow-up to my previous question, which I've partly resolved, what we're trying to do now is take an exported .ldif file of the "Users" container on our OS X Server and import it into a separate OpenLDAP server on an EC2 instance. We'll use this for LDAP user authentication of other apps without having to open our internal network to LDAP traffic. The exported .ldif file thinks the DN of the "Users" container is cn=users,dc=server,dc=domain,dc=com. Is it easiest to configure the EC2 OpenLDAP server to think that its domain is the same, so the container is imported to the proper place? Or should we edit the text of the .ldif file to change the DN to match the external naming? Hopefully that makes sense... but I'm confused as to the best way to accomplish this.

    Read the article

  • Amazon EC2 SQL Server Connection

    - by cnxmax
    I have two instances running on Amazon AWS EC2. One is running MSSQL Server 2005, the other is running a web application. I CAN connect to the database in my app using a connection string that references the public IP of the EC2 instance running SQL Server. I CANNOT connect from the web app server if I change the connection string to reference the database server's private IP address - but I can connect if I run that same code on the database server itself. I can also remote desktop from the app server to the database server using the private IP. I have a feeling there is something in my SQL Server configuration that is preventing this remote connection. I have remote connections enabled, and I have it set to listen on all IP addresses. Any ideas? Other things I've tried: added exceptions to Windows Firewall, and tried connecting using the EC2 DNS names.

    Read the article

  • Using a standard e-mail address as the system-wide user name

    - by PeterMmm
    I'm going to re-build a very old Lotus Notes infrastructure, moving from 4.x to 8.5. I'm trying to set up Domino so that all user names consist of a single short string or the internet e-mail address. For example, the user "John Smith/ACME" should appear throughout the system as jsmith or [email protected]. I still get jsmith/ACME all around. Where it is most annoying is in the NAB when creating a new message. Is there a way to get all addresses in a uniform, standard e-mail address format, at least in mail? A mixup in the destination like "John Smith/ACME, [email protected]" confuses the users.

    Read the article

  • How can I force Parallels' networking to obtain an IP through a wireless router?

    - by RLH
    Here is my setup: I have a MacBook, a Thunderbolt Display, and an Ethernet connection plugged into the Thunderbolt Display. During the day, most of my network use can (and should) go over the Ethernet connection associated with the display. However, I also need to be able to connect to a wireless router, and the program that I need to run has to obtain an IP address from that wireless access point. This hasn't been a problem on the Mac OS X side. Considering my current setup, how can I arrange things so that I can access the internet in OS X, yet have my Windows 7 instance running in Parallels get its assigned IP address from the wireless router that my Mac is also connected to? I've fiddled around with Parallels' network settings for an hour, and I can't get Parallels to see the router, even though my Mac is certainly connected to it.

    Read the article

  • Best approach for utility class library using Visual Studio

    - by gregsdennis
    I have a collection of classes that I commonly (but not always) use when developing WPF applications. The trouble I have is that if I want to use only a subset of the classes, I have three options:

    1. Distribute the entire DLL. While this approach makes code maintenance easier, it does require distributing a large DLL for minimal code functionality.
    2. Copy the classes I need to the current application. This approach solves the problem of not distributing unused code, but completely eliminates code maintenance.
    3. Maintain each class/feature in a separate project. This solves both problems from above, but then I have dramatically increased the number of files that need to be distributed, and it bloats my VS solution with tiny projects.

    Ideally, I'd like a combination of 1 & 3: a single project that contains all of my utility classes but builds to a DLL containing only the classes that are used in the current application. Are there any other common approaches that I haven't considered? Is there any way to do what I want? Thank you.

    Read the article

  • Code Contracts and Pex at MSDN Live 2010

    - by terje
    One of the six sessions Mikael Nitell and I are running at MSDN Live 2010 here in Norway is about code quality, and part of that session goes through the use of Code Contracts and Pex. Both are fantastic tools! They can be used together, but they are also completely independent of each other, and each can be used on its own.

    Code Contracts has to be downloaded separately from VS 2010 (it also works on VS 2008). Start by looking at http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx. The download is free. Code Contracts originates from the ideas of Bertrand Meyer - Design by Contract; take a look here.

    Pex is found on the MSDN Subscription download, so it requires an active MSDN Subscription. Get it from http://research.microsoft.com/en-us/projects/pex/downloads.aspx. The current version as of 14.4.10 is 0.9, which works with the 2010 RC; a new version is due this week. Pex is a tool that generates unit tests, and does this very intelligently - perfect for making tests for legacy code, but also for making sure you get all paths tested. See the Reference information and project startup information.
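    To give a feel for what Code Contracts looks like in practice, here is a tiny illustrative example (not taken from the session material): preconditions and postconditions are written as ordinary C# calls, which the static checker analyses and the binary rewriter can turn into runtime checks. Pex can then use such contracts as assumptions and assertions when it generates parameterized unit tests that try to cover all paths.

      using System.Diagnostics.Contracts;

      public class Account
      {
          private int balance;

          public void Withdraw(int amount)
          {
              Contract.Requires(amount > 0);                 // precondition
              Contract.Requires(amount <= balance);          // precondition
              Contract.Ensures(balance ==
                  Contract.OldValue(balance) - amount);      // postcondition
              balance -= amount;
          }
      }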

    Read the article
