Search Results

Search found 16329 results on 654 pages for 'b long'.

Page 321/654 | < Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! Well, during this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option from the whole. This has been my decision:
    1) I don't have an entity class; it is just an id.
    2) I have systems that contain a list of components (the list is homogeneous, I mean, RenderSystem will just have RenderComponents).
    3) Components will be just data.
    4) There would be some kind of "entity prototypes" in a manager or something, from which we will create entity instances. Ideally they will define the type of components an entity has and its initialization data.
    5) Prototype code to create an entity (this is off the top of my head): int id = World::getInstance()->createEntity("entity template");
    6) This will notify all systems that a new entity has been created, and if the entity needs a component that the system handles, it will add it to the entity.
    OK, those are the ideas. Let's see if someone can help with the problems:
    1) The main problem is the templates that are sent to the systems in the creation process to populate the entity with the needed components. What would you use, an OR(ed) int? A list of strings?
    2) How do I do initialization for components when the entity has been created? How do I store this in the template? I have thought about having a virtual function in the template that, after the entity is created and populated, gets the components and sets their initialization values.
    3) Don't you think this is a lot of work for just entity creation?
    Sorry for the long post; I have tried to lay out my ideas and findings so that others could have a starting point, besides exposing my problems. Thanks in advance, Notbad.
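
    A minimal C# sketch of one way the "entity template" from points 4) to 6) could look. All class and member names here are illustrative, not taken from the post: the template simply holds a list of factories, each of which produces an already-initialized component, which sidesteps both the OR(ed)-int question and the separate initialization step.

        using System;
        using System.Collections.Generic;

        public abstract class Component { }                       // components are plain data
        public sealed class RenderComponent : Component { public string Mesh = ""; }
        public sealed class PhysicsComponent : Component { public float Mass = 1f; }

        // An entity template pairs the component types an entity needs with
        // the initialization data for each of them.
        public sealed class EntityTemplate
        {
            public string Name = "";
            public List<Func<Component>> ComponentFactories = new List<Func<Component>>();
        }

        public sealed class World
        {
            private int _nextId = 1;
            private readonly Dictionary<string, EntityTemplate> _templates =
                new Dictionary<string, EntityTemplate>();

            public void RegisterTemplate(EntityTemplate template) =>
                _templates[template.Name] = template;

            // Creating an entity builds each component from its factory and would
            // hand it to whichever system owns that component type (systems omitted).
            public int CreateEntity(string templateName)
            {
                int id = _nextId++;
                foreach (var make in _templates[templateName].ComponentFactories)
                {
                    Component component = make();   // already carries its initial values
                    // e.g. renderSystem.Add(id, component);
                }
                return id;
            }
        }

    Registering a template would then look like new EntityTemplate { Name = "enemy", ComponentFactories = { () => new RenderComponent { Mesh = "enemy.obj" }, () => new PhysicsComponent { Mass = 10f } } }, so the component set and the initialization values live in one place.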

    Read the article

  • Accidentally deleted the software for MyPassport Essential SE 1TB Hard Drive

    - by user26192
    I'm posting for a friend of mine. She bought a WD MyPassport Essential SE 1 TB hard drive the other day. When she plugged the USB into her laptop, the drive could not be recognized by the SmartWare software. While she was doing a backup of her files, McAfee was running in the background. Since the backup was taking so long to finish, she decided to pause it. She tried to delete the partially backed-up files, but instead she accidentally deleted the entire contents of the folder, including the pre-installed software. Now, when she tries to start up the MyPassport, the SmartWare doesn't show up anymore. Can someone please give us advice on what she can do about this? Thank you.

    Read the article

  • Create a system image in Windows 8

    - by Greg Low
    One of the things that I've just come to accept is that the designers of Windows 8 and I think very differently. It'll take a long time to convince me that shutting down the computer is a "setting". Even after using Windows 8 for quite a while now, I still find that I struggle nearly every day, just trying to do things that I previously knew how to do. That's just not a good thing.
    Today I decided to create a system image as I hadn't made one lately. I started in Control Panel looking for Backup options. That yielded nothing except programs that wanted to "Save backup copies of my files with file history". I thought "oh well, let's just try the new search options". I hit the Windows key and typed "Backup". No, nothing came up there either.
    I searched again all over the Control Panel options to no avail. So it was time to hit Google again. Once again, clearly lots of people used to know how to do this and have been trying to work out where this option went.
    The first trick is that there are a bunch of Control Panel options that don't appear in the Control Panel. In the address bar at the top, if you click on Control Panel, you'll find there is an option that says "All Control Panel Options". That is curious given that's where I thought I was when I opened Control Panel. No hint is given on that screen that there are a bunch of hidden options. Nonetheless, I then checked out "all" the options.
    The option that you need to create a system image in Windows 8 turns out to be the "Windows 7 File Recovery" option that appears in this extended list. Why does it say "Windows 7" when it's for "Windows 8" as well and I'm running "Windows 8"? Why do I have to choose an option that says "File Recovery" to create a system image backup? <sigh> But at least I've recorded it here for the next time I forget where to find it.
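
    For anyone who would rather skip the Control Panel hunt entirely: the same kind of image can also be created from an elevated command prompt with wbadmin, which ships with Windows 8. The target drive letter below is a placeholder; point it at whatever disk should hold the image:

        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet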

    Read the article

  • How do you proactively guard against errors of omission?

    - by Gabriel
    I'll preface this with: I don't know if anyone else who's been programming as long as I have actually has this problem, but at the very least, the answer might help someone with less experience. I just stared at this code for 5 minutes, thinking I was losing my mind because it didn't work:
        var usedNames = new HashSet<string>();
        Func<string, string> l = (s) =>
        {
            for (int i = 0; ; i++)
            {
                var next = (s + i).TrimEnd('0');
                if (!usedNames.Contains(next)) { return next; }
            }
        };
    Finally I noticed I forgot to add the used name to the hash set. Similarly, I've spent minutes upon minutes over omitting context.SaveChanges(). I think I get so distracted by the details that I'm thinking about that some really small details become invisible to me - it's almost at the level of a mental block. Are there tactics to prevent this?
    Update: a side effect of asking this was fixing the error it would have for i > 9 (Thanks!)
        var usedNames = new HashSet<string>();
        Func<string, string> name = (s) =>
        {
            string result = s;
            if (usedNames.Contains(s))
                for (int i = 1; ; result = s + i++)
                    if (!usedNames.Contains(result))
                        break;
            usedNames.Add(result);
            return result;
        };
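
    A minimal repro of that i > 9 edge case, for anyone who skimmed past it (the class and string names here are mine, purely for illustration):

        using System;

        class TrimEndCollision
        {
            static void Main()
            {
                // i = 1 produces "item1" ...
                Console.WriteLine(("item" + 1).TrimEnd('0'));   // item1
                // ... and i = 10 produces "item1" again, because TrimEnd strips the
                // trailing zero - so the "unique" name generator hands out a duplicate.
                Console.WriteLine(("item" + 10).TrimEnd('0'));  // item1
            }
        }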

    Read the article

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563s!). As a result, our log files are growing at an insane pace. We just had to truncate a 180G slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if it stopped altogether, but same result.
        show global variables like 'general_log%';
        +------------------+--------------------------+
        | Variable_name    | Value                    |
        +------------------+--------------------------+
        | general_log      | OFF                      |
        | general_log_file | /usr2/mysql/data/db4.log |
        +------------------+--------------------------+
        2 rows in set (0.00 sec)

        show global variables like 'slow_query_log%';
        +---------------------------------------+-------------------------------+
        | Variable_name                         | Value                         |
        +---------------------------------------+-------------------------------+
        | slow_query_log                        | ON                            |
        | slow_query_log_file                   | /usr2/mysql/data/db4-slow.log |
        | slow_query_log_microseconds_timestamp | OFF                           |
        +---------------------------------------+-------------------------------+
        3 rows in set (0.00 sec)

        show global variables like 'long%';
        +-----------------+----------+
        | Variable_name   | Value    |
        +-----------------+----------+
        | long_query_time | 1.000000 |
        +-----------------+----------+
        1 row in set (0.00 sec)
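
    One setting worth ruling out - this is an assumption on my part, not something shown in the question: when log_queries_not_using_indexes is ON, statements that use no index get logged regardless of long_query_time, which would match fast queries flooding the slow log. Percona Server also adds its own slow-log filter and rate variables, which are worth listing:

        SHOW GLOBAL VARIABLES LIKE 'log_queries_not_using_indexes';
        SHOW GLOBAL VARIABLES LIKE 'log_slow%';

        -- If the first variable turns out to be the culprit, it is dynamic and
        -- can be switched off at runtime:
        SET GLOBAL log_queries_not_using_indexes = OFF;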

    Read the article

  • What is a better way to retrieve data in an application that needs to handle a limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since adding my code snippets would make this question really long. Instead, I am interested in knowing a better way to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis based on user preference, the actual version update happens only once every few months or so, since this is done by some other authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups) x (days in a week) x (n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings. Although I am using SQLite at the moment, I want to know how to perform the update on the database. Should I show a progress dialog saying that the application is updating and exit the app when it's done? Since my updates are infrequent I don't think this will really harm the user experience, but is there any better way to do it? I don't want the update to be made while the user is searching, which is done using the database - this will cause a "database already open" exception; at least I have faced this problem before. Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade performance?

    Read the article

  • Windows 7 boot order and locations

    - by Russ C
    Hi, Long story short, a program that shouldn't have been run on this machine has been, and it's created a naughty .sys file that is being loaded right after pci.sys (as determined by ntbtlog.txt). I've had a look at BCDEdit, EasyBCD and a number of registry keys, but I can't seem to determine whereabouts winstart.exe actually gets the list of .sys files to load from! The .sys file itself is running at high elevation and appears to be defeating all attempts to remove it; I could (probably should) make a Linux USB boot disc and use it to delete the .sys file, but I'd really appreciate understanding the mechanics here. ((FWIW: the problem stemmed from a sibling running a trainer for some game; he has been suitably chastised.))
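
    For what it's worth (this is background knowledge, not something from the post): the list of boot- and system-start drivers isn't read from a file by winstart.exe; it is enumerated from the registry under HKLM\SYSTEM\CurrentControlSet\Services, where each driver has a Start value (0 = boot, 1 = system) and an ImagePath. A few commands that may help pin the entry down - "baddriver" below is a placeholder for the name seen in ntbtlog.txt:

        rem List every kernel driver Windows knows about, running or not:
        sc query type= driver state= all

        rem Show how and from where one particular driver is loaded:
        sc qc baddriver
        reg query HKLM\SYSTEM\CurrentControlSet\Services\baddriver /v Start
        reg query HKLM\SYSTEM\CurrentControlSet\Services\baddriver /v ImagePath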

    Read the article

  • Installing isolated instance of MySQL on Windows using silent install with .msi

    - by Abram
    I'm trying to write an installer for an internal application we wrote. After it installs our application, it then installs MySQL using the .msi installer in silent mode. I set the install dir and data dir to a directory within my application's install directory, such as:
        msiexec /i @@MYSQL_INSTALLER_FILE@@ /qn INSTALLDIR="@@INSTALL_DIR@@\MySQL\" DATADIR="@@INSTALL_DIR@@\MySQL\" USERNAME="@@DB_USER@@" PASSWORD="@@DB_PASS@@"
    (the @@variable@@'s are replaced by my installer routine using InstallJammer). Once installed, I use mysqld.exe to install a Windows service with a custom service name and defaults file like so:
        mysqld.exe --install CustomMySQL --defaults-file="@@INSTALL_DIR@@\MySQL\my.ini"
    This works fine as long as there is not already another instance of MySQL installed. If there is, it silently fails to install MySQL. Running the .msi installer manually (double-click) shows an error that a previous version is already installed, and it just aborts. Is there a way to automate installing MySQL as an isolated instance, regardless of whether another version/instance is already installed?
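
    One commonly suggested workaround (an assumption on my part, the post itself doesn't mention it): skip the .msi entirely for the second instance, unpack MySQL's "noinstall" ZIP distribution into @@INSTALL_DIR@@\MySQL, and register the service with the same mysqld.exe --install line as above. The instance then only needs its own defaults file so that it doesn't collide with whatever is already installed; a minimal sketch of that my.ini (the port and paths are placeholders):

        [mysqld]
        port    = 3307
        basedir = "@@INSTALL_DIR@@/MySQL"
        datadir = "@@INSTALL_DIR@@/MySQL/data"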

    Read the article

  • Back in Town and Ready for New Beginnings

    - by MOSSLover
    Originally posted on: http://geekswithblogs.net/MOSSLover/archive/2013/11/03/back-in-town-and-ready-for-new-beginnings.aspxI just took a super long trip that lasted from September 27th until today.  I flew into St. Louis and then rented a car and drove over 12,000 miles.  I just dropped the rental off last night.  I went to a ton of states, did a lot of really cool things, saw a lot of really cool people, and bought a ton of beer.  I made some decisions, but this post isn't really about my decisions.  It's more about the question that everyone has been asking, "Where am I going to work?".So here's the answer...BlueMetal Architects as a Senior SharePoint Engineer.  Here is their website: http://www.bluemetal.com/.  I basically start tomorrow.  I didn't want to post anything super early, because I didn't want to jinx things.  I am really excited.  Now that I'm back I'm hoping that things will start to turn around for me.  I look forward to the future.

    Read the article

  • Failing SSHFS connection drags down the system

    - by skerit
    From time to time my sshfs mount fails. All programs using the mount freeze when it happens. I can't even ls anything or use nautilus. Is there a way to find out what the cause is and how to handle it? I've noticed regular SSH sessions to the server get their fair share of Write failed: broken pipe disconnects, too. If I wait long enough (and I'm talking about 20-ish minutes, here) it will auto reconnect and things start working again.
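
    A hedged suggestion rather than a diagnosis: mounting with sshfs's reconnect option plus ssh keepalives makes a dead connection fail fast and re-establish itself, instead of leaving every process blocked on the mount for 20-odd minutes (the host and paths below are placeholders):

        sshfs user@host:/remote/dir /mnt/remote \
            -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3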

    Read the article

  • How do I install yum on Redhat Enterprise 4?

    - by Bob Cross
    For historical reasons, one of the machines that I manage has a Redhat Enterprise 4 boot disk (among others). Every now and then, we have to boot into RHEL4 to bring up some of the legacy software that we support and connect to. Since it's a fringe system, the Redhat support has long since lapsed and I can't convince myself that it would be worth paying just to get RPMs that I can go and get for myself. That said, the default RHEL tools are heavily biased against letting you do exactly that. I would like to install yum and use that as my package discovery and installation. So, is there an installation guide to integrating yum with an older RHEL 4 system?
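
    A sketch of the usual bootstrap route, offered with the caveat that the package list is from memory and the mirror URL is an assumption: RHEL 4 and CentOS 4 share the same package base, so yum and its dependencies can be pulled from a CentOS 4 archive, installed by hand with rpm, and then pointed back at that same archive.

        # install yum and its dependencies with plain rpm first
        # (rpm will complain if this list is missing anything):
        rpm -Uvh sqlite-*.rpm python-sqlite-*.rpm python-elementtree-*.rpm \
                 python-urlgrabber-*.rpm yum-*.rpm

    Then give yum somewhere to look, e.g. a file /etc/yum.repos.d/centos4-vault.repo containing (baseurl shown is an assumption; substitute a mirror you trust):

        [centos4-vault]
        name=CentOS 4 vault
        baseurl=http://vault.centos.org/4.9/os/$basearch/
        gpgcheck=0
        enabled=1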

    Read the article

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine on a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be a custom-developed one, rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but have probably missed something - obviously the sites won't share the same folder, as I envisage the requests to be css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it... Long story short: how should the CMS handle user-uploaded media (separate installations)?
    - Just keep a local copy of all assets and serve them from the same site, like in days of yore?
    - Keep a local copy, force the site to use www. and have CDN subdomains per site?
    - Or use a single separate CDN for all sites?
    Apologies for the length of this question; I'm not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.

    Read the article

  • I don't program in my spare time. Does that make me a bad developer?

    - by not-my-real-name
    A lot of blogs and advice on the web seem to suggest that in order to become a great developer, doing just your day job is not enough. For example, you should contribute to open source projects in your spare time, write smartphone apps, etc. In fact a lot of this advice seems to suggest that if you don't love programming enough to do it all day long then you're probably in the wrong career. That doesn't ring true with me. I enjoy my work, but when I come home from the office I'm not in the mood to jump straight back onto the computer and start coding away until bedtime. I only have a certain number of hours free time each day, and I'd rather spend them on other hobbies, seeing friends or going outside than in front of the computer. I do get a kick out of programming, and do hack around outside of work occasionally. I'm committed to my personal development and spend time reading tech blogs and books as a way to keep learning and becoming better. But that doesn't extend so far as to my wanting to use all my spare time for coding. Does this mean I'm not a 'true' software developer at heart? Is it possible to become a good software developer without doing extra outside your job? I'd be very interested to hear what you think.

    Read the article

  • Pipe an infinite stream to internal loop?

    - by Sh3ljohn
    I've seen a lot of things about redirecting stdout to a TCP socket, but no real example of how to do it in practice, specifically when the output stream generated by the first "command" never ends. To talk about something concrete, let's take programs like servers that typically write their log endlessly to stdout (well, as long as they run). If you redirect the output to a log file on disk, then this file is always open (therefore not readable by others?) and grows infinitely, which eventually is going to cause problems. This might be a noob question, but I don't know how it works or how to do it, so:
    1. How do I redirect the output of a command to the internal loop?
    2. I want to make sure that data is sent EVERY time something is written to stdout, and that the pipe won't wait for the command to end (which ideally never happens!). Is that right?
    3. If 2 is true, is there a buffering system to send chunks of data only once they reach a certain size?
    Could you give me concrete command line examples to do the above? Thanks in advance
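
    A couple of concrete, hedged sketches (the host name and port below are placeholders). Piping into nc streams data as soon as the writing program flushes it, and bash's built-in /dev/tcp pseudo-device does the same without an extra tool; stdbuf can force line-buffering if the program only flushes in large blocks:

        # pipe an endless stdout/stderr stream to a TCP socket, line by line:
        stdbuf -oL someserver 2>&1 | nc loghost 5140

        # pure-bash alternative (bash only, not plain sh):
        someserver > /dev/tcp/loghost/5140 2>&1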

    Read the article

  • Realtime file-level mirroring from local NTFS to network drive

    - by hurfdurf
    We have some data collection machines running WinXP. After a new file is written, we would like to immediately copy the new file to network storage (a NetApp CIFS share) automagically. We need realtime or near realtime copies generated (copy upon filehandle close would be fine -- these are not long-running system logs). Two commercial applications I've found so far are MirrorFile and IBM's Tivoli CDP. Are there any reliable open source programs or simple ways to get Shadow Copy to do something similar? Bonus points if it runs as a service.
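
    One low-tech option to compare the commercial tools against (the paths are placeholders, and note that on Windows XP robocopy has to be installed from the Resource Kit, it isn't built in): robocopy's monitor mode re-scans the source whenever changes are detected and copies the new files across, which gives near-realtime, file-level mirroring without extra software:

        robocopy C:\CollectedData \\netapp\share\CollectedData /E /MON:1 /MOT:1 /R:2 /W:5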

    Read the article

  • Are there any good examples of open source C# projects with a large number of refactorings?

    - by Arjen Kruithof
    I'm doing research into software evolution and C#/.NET, specifically on identifying refactorings from changesets, so I'm looking for a suitable (XP-like) project that may serve as a test subject for extracting refactorings from version control history. Which open source C# projects have undergone large (number of) refactorings? Criteria A suitable project has its change history publicly available, has compilable code at most commits and at least several refactorings applied in the past. It does not have to be well-known, and the code quality or number of bugs is irrelevant. Preferably the code is in a Git or SVN repository. The result of this research will be a tool that automatically creates informative, concise comments for a changeset. This should improve on the common development practice of just not leaving any comments at all. EDIT: As Peter argues, ideally all commit comments would be teleological (goal-oriented). Practically, if a comment is made at all it is often descriptive, merely a summary of the changes. Sadly we're a long way from automatically inferring developer intentions!

    Read the article

  • BSOD doing wireless connection repair

    - by Kb
    (This should maybe be asked on Super User, but currently it is only for beta users.) We have a Dell Latitude X1 which always gets a BSOD (page fault in non page area) after repair on wireless. Wireless works as long as we do not do repair. Have tried the following with no success: Have updated BIOS to latest version and updated drivers for Intel wireless to the latest version. Any other suggestions? For now we just do not do repair and maybe that is the solution?

    Read the article

  • How to batch convert video files on OSX for AppleTV2 / iPhone4?

    - by Luke404
    I'd like to have a solution to batch convert video files to a format suitable for the AppleTV2, iPad2, iPhone4, while at the same time preserving as much quality as possible; I want a single output file that will play on both devices and also good for consumption by other Mac software (eg. Aperture, iMovie, iTunes). Batch processing is a requirement since I'm gonna convert many many files from different sources (mainly lots of videos captured by compact digital cameras, cell phones, and so on). I'm looking into ffmpeg and MEncoder (both installed via MacPorts), but I can't seem to find a suitable preset for libx264 even if everyone out there is talking about them. A different approach involving different software would be ok too as long as I can script it somehow and run it on a whole directory full of files to be converted.
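
    One scriptable route that fits this bill (offered as a sketch, not as the one right tool): HandBrake's command-line binary, HandBrakeCLI, ships device presets whose output is meant to play on the AppleTV 2 and iPhone 4 and imports cleanly into iTunes. The preset name below matches the 0.9.x series and may differ in other versions; the file extensions are just examples:

        for f in *.MOV *.mp4 *.avi; do
            [ -e "$f" ] || continue                      # skip patterns that match nothing
            HandBrakeCLI -i "$f" -o "${f%.*}.m4v" --preset="AppleTV 2"
        done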

    Read the article

  • After a period of time, nslookup still works, but pinging and an auto-refreshed website fail.

    - by Mark Hurd
    Contrary to this SO question this is for a dotted name (gw.localnet.au), and it doesn't happen straight away. Only after some period of time (quite a long time, possibly days). In fact this is for my ADSL router and its internal IP address which I have named within the router itself and in my Windows Server 2003 Domain Controller DNS Service. Specifically, localnet.au is a Active-Directory-backed primary domain. In fact, an ipconfig /flushdns may fix the problem, but only after a while (about the time it took me to type in this question :-) ). That doesn't explain the root cause though... Manually transferred from stackoverflow.com

    Read the article

  • What are the practical limits on file extension name lengths?

    - by GorillaSandwich
    I started using DOS back before Windows, and ever since have taken it for granted that Every file has a file extension, like .txt, .jpg, etc That extension is always short (usually 3 letters) I learned early that the extension is basically just a hint to the OS as to what the content type is. Eventually I got exposed to Mac and Linux, files with no extensions, etc. And of course I've seen shorter extensions, like .rb and .py. I just noticed that markdown-formatted files can have the extension .markdown, and it made me wonder - how long can that extension be? If I make it .mycrazylongextensiontypewoohoo, will certain operating systems or programs choke on the file? Are extension names generally short just for convenience, or is this based on some limitation, legacy or current?

    Read the article

  • Installation fails after period of time

    - by Jack Marchetti
    I'm running Windows 7 64 bit, Ultimate. I'm running into a problem where, if my computer is on long enough, I am unable to install any software. For example, if I reboot and try to install something, for the most part, it will succeed. if I try and install something after a few hours/days, it will always fail. Sometimes stating that "a previous install failed..." or "an install is already taking place..." even though there is nothing being installed. This also seems to be making my Windows updates fail at a pretty regular rate. Any ideas?

    Read the article

  • ISA bus on newer computers

    - by Kevin Ivarsen
    Are there companies that sell new computers that support old ISA bus expansion cards? We have an aging computer running DOS that operates some machinery via an ISA interface board. Updated versions of this board (e.g. PCI, USB) are not available, and I am concerned about the long-term reliability of the 8+ year old computers we currently keep around as backups. If these newer ISA-capable machines exist, are there any general gotchas to be aware of in terms of compatibility with older expansion boards, ability to run DOS, etc.?

    Read the article

  • Something is spamming from my hMail server - how can I deal with this?

    - by joshcomley
    My Windows 2008 server is attempting to send out a lot of spam, I've just discovered, and I'm not sure how to see where the compromise is. For example: has someone hacked an account? Has someone hacked the server? Is there a virus on the server? What can I do to investigate this? Edit Thanks for the replies so far. I am running hMail server, and have spent so long investigating the correct configuration but still I end up with these emails being sent. Here is a screenshot of my Internet IP range settings on the server: (let me know what else I can provide to help)

    Read the article

  • Is it possible to turn a shortcut to a folder into a menu of items in that folder?

    - by MrVimes
    I vaguely remember seeing something on the internet that showed it was possible to turn any shortcut to a folder into a menu, as long as that shortcut was in the start menu or on the quicklaunch bar (i.e. somewhere that allowed menu functionality). Does anyone know if this is possible? And if so, how to do it? I'd like to be able to do this... With a link in my quicklaunch area... I remember it had something to do with renaming the shortcut with a long string of characters placed between '{' and '}'. I realize how picky this request is, as I have more or less achieved what I am looking for by placing the 'desktop' toolbar on my start bar. But I'd rather it be an icon in my quicklaunch. Just humour me :)

    Read the article

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have:
    - a web service at example.com
    - a secret key shared between a user and the server [K]
    - a consumer ID which is known to the user and the server (but is not necessarily secret) [D]
    - a message which we wish to send to the server [M]
    The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it's very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course a bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats. I was thinking of an alternative, which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K, R). The user then passes the random string to the server and the server checks the HMAC server-side (using random string + shared secret). As far as I can see, this produces the following issues:
    - There is no message integrity - this is OK; message integrity is not important for this service.
    - A user could re-use the hash with a different message - I can see 2 ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once.
    - Since the client is in control of the random string, it is easier to look for collisions.
    I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
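
    A minimal C# sketch of the proposed scheme, assuming HMAC-SHA256 and a timestamp bound into the MAC (all type and parameter names here are illustrative, not an existing API). The timestamp plus a server-side record of recently seen nonces is what limits replay, since the message body itself is deliberately left out of the MAC:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class RequestSigner
        {
            // H = HMAC(K, D + R + timestamp); the server recomputes this from the
            // same fields sent alongside the request and compares the results.
            public static string Sign(string secretKey, string consumerId,
                                      string nonce, DateTimeOffset timestamp)
            {
                string payload = consumerId + "\n" + nonce + "\n" +
                                 timestamp.ToUnixTimeSeconds();
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    return Convert.ToBase64String(mac);
                }
            }
        }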

    Read the article
