Search Results

Search found 18422 results on 737 pages for 'down town'.

  • Can't shut down Ubuntu (Wubi installation)

    - by zsgre3nd3s7
    I downloaded Ubuntu 12.04 using Wubi. The installation went correctly and everything worked fine until I tried to shut down. I clicked shutdown and Ubuntu began shutting down, but as soon as I saw the Ubuntu logo with the dots beneath it, it froze, and I had to perform a hard shutdown. After booting my computer back up and going into Ubuntu, I tried shutting it down again, but this time it took me to a black screen with lots and lots of log output; after a little while, the output stopped. I was able to type characters with the keyboard and everything, but it never shut down, and I had to perform a hard shutdown again. Now it always shows the Ubuntu logo and freezes. What should I do? I know hard shutdowns are bad and want to avoid them. Is there any way to make shutdowns work? I tried a reboot and it also froze on the Ubuntu logo. Hardware: Sony VAIO E Series SVE17115FDB laptop; graphics card: AMD Radeon HD 7650M (it installed correctly in Ubuntu); BIOS: H2O BIOS; processor: Intel i7-3612QM. Edit: I only installed the AMD/ATI proprietary driver (fglrx), not the AMD/ATI post-release drivers, because those keep showing an error message. Here is jockey.log. Edit 2: Here is the log I mentioned earlier that appeared on my screen; it appeared after I tried reinstalling my AMD driver, which failed, so I reinstalled the other one. Sorry for the quality; I took those pictures with my phone.

  • Quickbooks and Cisco ASA 5505

    - by user41516
    I have a Cisco ASA 5505, and everything has seemed to function fine; however, QuickBooks 2008 runs extremely slowly over the network (Samba), and I have narrowed the problem down to the Cisco box. Other Samba transfers seem to be plenty fast, and the QuickBooks file is not very large at all (<25 MB). I'm not sure whether QuickBooks uses another protocol or whether there are other ports that need to be opened. Can anybody give me any clues on how to troubleshoot or resolve the problem, or share prior experience running QuickBooks over a network?

  • Middle mouse click in VirtualBox (Vista host, Debian guest)

    - by Ken
    I'm running VirtualBox on Windows Vista. I have a Microsoft USB mouse (it says "Comfort Optical Mouse 3000") with left and right buttons and a mouse wheel in the middle. If I press down on the wheel, it quite clearly makes a click. I'm running Debian inside VirtualBox, and it's working great, but middle-click does nothing. Left and right click, and scrolling with the wheel, work fine. Is there any way to get middle-click to work in my virtual machine?

  • Office 2007 install limits?

    - by AppsByAaron
    I was able to install Office 2003 on a couple of computers that I own without any trouble. However, I just got Office 2007 and I'm unable to install it on both my computer and my laptop. I know this may violate the license agreement and whatnot, but I'm curious whether this is something Microsoft has finally started to crack down on. Any information is helpful. And I know I'm going to get a few answers telling me that I'm "stealing" or whatever, so I don't need to hear about that. Thanks. ;)

  • Access points fighting for dominance?

    - by Phillip Oldham
    We have a small office with a large number of wireless devices (a mixture of desktop machines, laptops, and wifi-enabled phones), all working from a single Apple AirPort Extreme that extends our wired network. I've added another AirPort Extreme for resiliency, since we've been seeing a decrease in performance and (as far as I understand) access points can only handle a small number of clients. I set the new AP to extend the current network so that the clients weren't constantly switching between different wireless networks; however, as soon as the new AP was configured, all the wireless devices started seeing network trouble, with connections flicking on and off. I'm assuming this is because both APs are reasonably strong and the clients can't decide which one to use. What is the best route to resolving this? What I'm aiming for is wireless resiliency: preferably having the two APs share the network load or, if that isn't an option, having a primary AP with a fail-over should the primary go down for any reason.

  • How to associate all file types within Wine with their corresponding native applications?

    - by MestreLion
    This is easily done for a single file type, as answered in How to associate a file type within Wine with a native application?, by creating a .reg file for the desired file type. But that covers AVI only. I use some Wine apps (uTorrent, Soulseek, and Eudora, to name a few) that can launch a wide range of files. Email attachments, for example, can be JPG, DOC, PDF, PPS... it's impossible (and not desirable) to track down every file type one might receive in an email or download in a torrent. So I need the solution to be more generic and broad: the file association should honor whatever native app is currently configured, and it should cover all file types configured on my system. I've already figured out how to make the solution generic; simply point the launched app in the .reg file at winebrowser, like this:

        [HKEY_CLASSES_ROOT\.pdf]
        @="PDFfile"
        "Content Type"="application/pdf"

        [HKEY_CLASSES_ROOT\PDFfile\Shell\Open\command]
        @="C:\\windows\\system32\\winebrowser.exe \"%1\""

    I've tested this and it works correctly. Since winebrowser uses xdg-open as a backend and converts my Windows path to a Unix one, the correct (Linux) app is launched. So what I need is a "batch" updater for Wine's registry, a sort of wine-update-associations script that I can run whenever a new app is installed. Maybe a tool that can:

    - list all MIME types on my system that have a default, installed app associated;
    - extract all the needed info (glob, MIME type, etc.);
    - generate the .reg file in the above format.

    The tricky part is that I've searched a LOT for info about how association is done in Ubuntu 10.10 onwards, and the documentation is scarce and confusing, to say the least. Freedesktop.org has no complete spec, and even the GNOME docs are obsolete. So far I've gathered four files that contain association info, but I'm clueless about which (or why) to use, or how to use them to generate the .reg file:

    - ~/.local/share/applications/mimeapps.list
    - ~/.local/share/applications/mimeinfo.cache
    - /usr/share/applications/mimeinfo.cache
    - /etc/gnome/defaults.list

    Any help, script, or explanation would be greatly appreciated! Thanks!
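
    A sketch of what such a wine-update-associations script could look like, in Python: it reads the [Default Applications] section of mimeapps.list plus the system glob database and prints a .reg file in the winebrowser format above. The file locations and the REGEDIT4 output header are assumptions based on the files listed; treat it as a starting point, not a finished tool.

        #!/usr/bin/env python3
        # Sketch: regenerate Wine associations from the system defaults.
        # Assumed inputs: mimeapps.list ([Default Applications] section)
        # and /usr/share/mime/globs (MIME type -> *.ext patterns).
        import configparser
        import os

        GLOBS = "/usr/share/mime/globs"
        MIMEAPPS = os.path.expanduser("~/.local/share/applications/mimeapps.list")

        ext_for = {}  # e.g. "application/pdf" -> [".pdf"]
        with open(GLOBS) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    mime, _, glob = line.partition(":")
                    if glob.startswith("*."):
                        ext_for.setdefault(mime, []).append(glob[1:])

        cp = configparser.ConfigParser(interpolation=None, strict=False)
        cp.read(MIMEAPPS)
        defaults = cp["Default Applications"] if cp.has_section("Default Applications") else {}

        print("REGEDIT4\n")
        for mime in defaults:
            for ext in ext_for.get(mime, []):
                progid = ext.lstrip(".").upper() + "file"
                print(f"[HKEY_CLASSES_ROOT\\{ext}]")
                print(f'@="{progid}"')
                print(f'"Content Type"="{mime}"')
                print()
                print(f"[HKEY_CLASSES_ROOT\\{progid}\\Shell\\Open\\command]")
                print(r'@="C:\\windows\\system32\\winebrowser.exe \"%1\""')
                print()

    Redirecting the output to a file and importing it with wine regedit would then refresh the associations; rerunning the script after installing a new native app picks up the new defaults.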

  • Priority Manager – Part 1 – Laying out the plan

    - by Patrick Liekhus
    Now that we have shown the EDMX with XPO/XAF and how to use SpecFlow and BDD to run EasyTest scripts, let's put it all together and show the evolution of a project using all the tools combined. I have a simple project that I use to track my priorities throughout the day. It uses some of Stephen Covey's principles from The 7 Habits of Highly Effective People. The idea is to write down all your priorities the night before and rank them, so that when you get started the next day, your list of priorities is ready. It's not that new things won't appear tomorrow and reprioritize your list, but at least now you can track them. My idea is to create a project that lets you manage your list from your desktop, a web browser, or your mobile device, so your list is never too far away. I will lay out the data model and the additional concepts as time progresses. My goal is to show the power of all of these tools combined, and I thought the best way would be to build a project in sequence. I have had this idea for quite some time, so let's get it completed with the outline below. Here is the outline of the series of posts in the near future:

    Part 2 – Modeling the Business Objects
    Part 3 – Changing XAF Default Properties
    Part 4 – Advanced Settings within Liekhus EDMX/XAF Tool
    Part 5 – Custom Business Rules
    Part 6 – Unit Testing Our Implementation
    Part 7 – Behavior Driven Development (BDD) and SpecFlow Tests
    Part 8 – Using the Windows Application
    Part 9 – Using the Web Application
    Part 10 – Exposing OData from our Project
    Part 11 – Consuming OData with Excel PowerPivot
    Part 12 – Consuming OData with iOS
    Part 13 – Consuming OData with Android
    Part 14 – What's Next

    I hope this helps outline what to expect. I anticipate I will have additional topics mixed in there, but I plan on getting this outline completed within the next several weeks. Thanks

  • VMware Server .lck file keeps coming back

    - by muncherelli
    I am running VMware Server 2.0 on a Debian Lenny system as the host OS. I get this error when I try to start a virtual machine:

        Cannot open the disk '/var/lib/vmware/Virtual Machines//.vmdk' or one of the snapshot disks it depends on. Reason: Failed to lock the file.

    I looked around on the web and found that I need to delete the .lck folder and file to get past this error. This seems to happen any time I reboot my Debian server; the virtual machines sometimes do not recover, and this .lck file is causing problems. Should I create a cron script that does an rm *.lck on each of my machines on reboot? I'm looking for any direction on how to resolve this. It seems that when I issue a "reboot" command, it may not be gracefully shutting down the VMware containers, so the lock files are left intact?
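
    For the cron-at-reboot stopgap mentioned above, something along these lines would do it; a minimal sketch, assuming the default /var/lib/vmware layout, and only safe to run while no VM is up (the .lck entries exist precisely to protect running VMs):

        #!/usr/bin/env python3
        # Clear stale VMware lock entries at boot (e.g. via @reboot in cron),
        # BEFORE the vmware service starts. The .lck is usually a directory.
        import glob
        import os
        import shutil

        VM_ROOT = "/var/lib/vmware/Virtual Machines"

        for path in glob.glob(os.path.join(VM_ROOT, "**", "*.lck"), recursive=True):
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
            print("removed", path)

    The underlying fix would be getting the init ordering to shut the guests down cleanly before reboot kills VMware; the script only papers over it in the meantime.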

  • How to maximise performance for computers connected to a LAN via a gigabit Ethernet router?

    - by penyuan
    Our group is setting up a file-sharing server (which might just be a NAS; we're not sure yet) that will connect to all the other computers in the room (about 10 of them). I am thinking of just hooking all of them up via a gigabit router/switch. Is there anything I should watch out for in terms of cables, connections, or the connection capabilities of each computer in the network? For instance, I don't want one slow computer in the LAN to slow down everyone else's connection. Thanks for the education.

  • Ubuntu 12.04 Server ping gateway responds with destination host unreachable

    - by blckblttkd
    I consider myself fairly experienced with Ubuntu and Linux, but this one has me stumped. I built a Xen server using Ubuntu 12.04 as the base operating system, with multiple domUs running on it. On my home network, with a statically defined configuration, all the network connectivity was peachy. The server was moved to its permanent home this morning, so the network configuration on the main system had to change. It is again a static network, but now I can't ping the upstream gateway from the host. As the VMs use this NIC over a bridge, they too are broken. Ping responds with "destination host unreachable." To narrow things down, I simplified the networking to a plain static configuration (no bridge or anything), as seen below. Here are the contents of my /etc/network/interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 216.7.188.228
            gateway 216.7.188.225
            netmask 255.255.255.240
            broadcast 216.7.188.255
            network 216.7.188.0
            dns-nameservers 8.8.8.8 8.8.4.4

    Here is the output of route -n:

        0.0.0.0        216.7.188.225  0.0.0.0          UG  100 0 0 eth0
        216.7.188.224  0.0.0.0        255.255.255.240  U   0   0 0 eth0

    And the results of pinging the gateway:

        PING 216.7.188.225 (216.7.188.225) 56(84) bytes of data.
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable

    Again, this worked flawlessly on the other network (obviously with different parameters in the interfaces file). I did try using eth1, as there are two NICs on the server, in case the MAC addresses got flipped on boot-up; no success there. Yes, the cable is in the right port now :) Any thoughts? I appreciate the help!
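
    One low-tech thing worth double-checking before anything exotic: whether the broadcast and network lines in that interfaces file agree with the /28 netmask. Python's ipaddress module does the arithmetic (just a verification sketch, nothing more):

        #!/usr/bin/env python3
        # Sanity-check the subnet numbers from /etc/network/interfaces.
        import ipaddress

        iface = ipaddress.ip_interface("216.7.188.228/255.255.255.240")
        print("network:  ", iface.network.network_address)    # 216.7.188.224
        print("broadcast:", iface.network.broadcast_address)  # 216.7.188.239
        print("gateway in subnet:",
              ipaddress.ip_address("216.7.188.225") in iface.network)  # True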

  • Strange stuff in apache log

    - by aL3xa
    Hi lads, I'm building a webapp of sorts, and currently the whole thing runs on my machine. I was combing through my logs and found several "strange" entries that made me a bit paranoid. Here goes:

        ***.***.***.** - - [19/Dec/2010:19:47:47 +0100] "\x99\x91g\xca\xa8" 501 1054
        **.***.***.** - - [19/Dec/2010:20:14:58 +0100] "<}\xdbe\x86E\x18\xe7\x8b" 501 1054
        **.**.***.*** - - [21/Dec/2010:15:28:14 +0100] "J\xaa\x9f\xa3\xdd\x9c\x81\\\xbd\xb3\xbe\xf7\xa6A\x92g'\x039\x97\xac,vC\x8d\x12\xec\x80\x06\x10\x8e\xab7e\xa9\x98\x10\xa7" 501 1054

    Bloody hell... what is this?!
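
    For what it's worth, a few lines of Python will enumerate every such binary-looking request in a log, to see how often it happens and from where (the log path is an assumption; adjust per vhost):

        #!/usr/bin/env python3
        # Flag access-log lines whose request field holds non-printable
        # bytes, like the \x99\x91... entries above.
        LOG = "/var/log/apache2/access.log"

        def looks_binary(line: bytes) -> bool:
            return any(b > 0x7e or (b < 0x20 and b != 0x09) for b in line)

        with open(LOG, "rb") as f:
            for lineno, raw in enumerate(f, 1):
                if looks_binary(raw.rstrip(b"\r\n")):
                    print(lineno, raw.decode("ascii", "backslashreplace"))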

  • F12 and Ctrl+F5 not working correctly

    - by ComatoseDuck
    When I came back to my work computer after being away for a week, I found that when I try to clear the cache and refresh the page via Ctrl+F5 (or just F5), I get the prompt "Type the Internet address of a document, and Internet Explorer will open it for you" with a drop-down list in IE. When I try F5 in Chrome and Firefox, it opens the "Open file" dialog box. When I press F12 for the dev tools in IE, Chrome, and Firefox, it opens the print dialog box. Why is this happening, and what can I do to revert it to the way it was?

  • Jungle Disk 3.16 doesn't launch

    - by Angelo
    Has anyone had success with Jungle Disk 3.16 on Ubuntu 11.10? I installed it from the .deb file provided by Jungle Disk. The install goes fine, but I can't get the "Jungle Disk Desktop" app to launch. It appears in the Dash search bar, but doesn't launch or do anything upon selecting it. When I try the command line, I get the following:

        me@myComputer:~$ jungledisk -V -f
        Verbose mode enabled
        Shutting down...
        me@myComputer:~$

    What's the deal here? Has anybody else experienced this? Does anyone have suggestions for what to try? I opened a help ticket with Jungle Disk, but they just asked me which Ubuntu version and which GUI I was using and then went silent. I've used Jungle Disk since 2008 and had no problems. It is sad that it is not working on the new Ubuntu for me. Should I just quit them and use Dropbox or Ubuntu One? (Those seem to be working.)

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Do neither.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers, and entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can be recorded as a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The original question, "how do you enforce the non-negative balance rule?", then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation over recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo-entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions, though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more accurate to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A race between two concurrent updates is completely coherent, and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case, in choosing a ledger approach, we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the new ledger entry. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks", with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually have written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. The other write concerns vary from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance; the others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From then on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand; they have already been trained by many online businesses that placing an order does not mean the product is waiting outside your door (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding logic such that a query against the ledger never surfaces a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether the transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1 ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to various accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted, but you don't always need them either. "No hope for real applications": Well... there are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. And then again, you can choose relational databases if they suit your problem.
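
    To make the insert-validate-compensate cycle concrete, here is a minimal sketch with pymongo; the database and collection names are placeholders, and the validation is the naive full-scan version (no "approved balance" roll-up, no per-account validator sharding):

        import datetime

        from pymongo import MongoClient
        from pymongo.write_concern import WriteConcern

        client = MongoClient()
        ledger = client.bank.get_collection(
            "ledger", write_concern=WriteConcern(w="majority"))  # flush to a majority

        def balance(account_id):
            """Net balance: credits to the account minus debits from it."""
            cursor = ledger.aggregate([
                {"$match": {"$or": [{"from": account_id}, {"to": account_id}]}},
                {"$group": {"_id": None, "total": {"$sum": {"$cond": [
                    {"$eq": ["$to", account_id]}, "$amount",
                    {"$multiply": ["$amount", -1]}]}}}},
            ])
            doc = next(cursor, None)
            return doc["total"] if doc else 0

        def transfer(src, dst, amount):
            entry = {"ts": datetime.datetime.utcnow(),
                     "from": src, "to": dst, "amount": amount}
            ledger.insert_one(entry)      # 1. record the attempt (atomic: one doc)
            if balance(src) < 0:          # 2. validate against the primary
                entry.pop("_id")          # insert_one stamped an _id on the dict
                entry["from"], entry["to"] = dst, src
                ledger.insert_one(entry)  # 3. compensating entry rolls it back
                return False
            return True

    The race described above is still present here: two concurrent validators can both decide to compensate. Pinning a single validator to each origin-account range, as the post suggests, removes it.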

  • Setting up phpMemCacheAdmin on CentOS 5.5

    - by Bill Smith
    I have been able to set up phpMemCacheAdmin (http://code.google.com/p/phpmemcacheadmin/) on CentOS and can view the localhost memcache statistics; however, whenever I add other memcached nodes, the config is never updated. I am fairly certain it has something to do with permissions, but I am unable to track down what exactly needs to be done, or how to do it. The install was pretty straightforward:

        wget http://phpmemcacheadmin.googlecode.com/files/phpMemCacheAdmin-1.1.3r161.tar.gz
        tar xvzf phpMemCacheAdmin-1.1.3r161.tar.gz
        chmod +w Config/Memcache.ini

    But the docs also state that Apache needs rw rights in the temp file folder (default: Temp/) and the entire config directory (Config/); that is the part I am unsure of. Help!

  • Wiki Application With A Reputation System

    - by Christofian
    I'm really impressed with Stack Exchange's concept of reputation (you gain reputation as you post, and the more you post, the more privileges you get), and I want to apply the concept to a wiki that I am building. Does anyone know of a PHP wiki that has a concept of privileges/reputation similar to Stack Exchange's? I'm not necessarily looking for something identical to SE; I'm just looking for a wiki application that gives users more privileges the more they contribute positively to the wiki (SE has down-votes, so the wiki should have some way of identifying negative contributions too). The privileges should be category-based, so the more active you are in a specific category or page, the more privileges you get for that category. There should also be site-wide privileges, though those should be harder to earn than the category ones. NOTE: if it is not possible to have both category-wide and site-wide privileges, I will be OK with just one or the other. I should be able to change the requirements for each privilege through an administration panel or by editing a file (some wiki applications don't have administration interfaces). Does anyone have a script or a solution that will do this? If the script uses something similar to reputation to determine how much a user has positively contributed to the site, that is OK too. Please note: I am looking for a way to rate individual user contributions, not a way to rate the quality of an entire page. A sketch of the privilege model I have in mind follows.
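
    The check itself is tiny (a hypothetical sketch, not taken from any existing wiki engine); the hard part is the bookkeeping around it:

        # Hypothetical category-scoped privilege check; thresholds would be
        # editable from the admin panel rather than hard-coded like this.
        CATEGORY_THRESHOLDS = {"edit_page": 100, "rollback": 500}
        SITE_THRESHOLDS = {"edit_any_page": 2000, "delete_page": 10000}

        def can(user_rep, privilege, category=None):
            """user_rep = {"site": int, "categories": {name: int}}"""
            if category is not None and privilege in CATEGORY_THRESHOLDS:
                earned = user_rep["categories"].get(category, 0)
                return earned >= CATEGORY_THRESHOLDS[privilege]
            return user_rep["site"] >= SITE_THRESHOLDS.get(privilege, float("inf"))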

  • Laptop freezes on Wireless Mouse Adapter attach/detach

    - by Sergiy Byelozyorov
    I am using a Microsoft Wireless Notebook Optical Mouse 3000 with a Toshiba Satellite Pro P300-1CG. I recently installed Windows 7 64-bit. Now, once in a while when I plug in or unplug the wireless mouse adapter, the computer freezes (the image stays on the screen, but the computer is not responsive). I have noticed that it mostly happens after I wake the laptop from sleep mode. To continue using the laptop, I have to shut it down by pressing and holding the power button for 7 seconds and then booting it up again. What can be done to prevent this?

  • Using template questions in a technical interview

    - by Desolate Planet
    I've recently been in an argument with a colleague about technical questions in interviews. As a graduate, I went round lots of companies and noticed they used the same questions. An example is "Can you write a function that determines if a number is prime or not?"; 4 years later, I find that particular question is still quite common, even for a junior developer. I might not be looking at this the right way, but shouldn't software houses be intelligent enough to think up their own interview questions? This may well be the case, but I've been to about 16 interviews as a graduate, and the same questions came up in about 75% of them. This leads me to believe that many companies are lazy and simply Google 'template questions for interviewing software developers', and I kind of look down on that. Question: Is it better to use a set of questions off some template, or should software houses strive to be more original and come up with their own interview material? From my point of view, if I failed an interview and went off and looked up good answers to the questions I messed up on, I could fly through the next interview if the questions are the same.
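
    For reference, the canned answer to that canned question is a few lines of trial division (Python here); the fact that a template answer exists for every template question is rather the point:

        import math

        def is_prime(n: int) -> bool:
            """Trial division up to sqrt(n) -- the stock interview answer."""
            if n < 2:
                return False
            for d in range(2, math.isqrt(n) + 1):
                if n % d == 0:
                    return False
            return True

        assert [x for x in range(20) if is_prime(x)] == [2, 3, 5, 7, 11, 13, 17, 19]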

  • Multi-gateway and backup routing on a Cisco router

    - by user64880
    Hi all, I have a Cisco 2611 router with only one FastEthernet port, and I now have two internet gateways. I want to configure the router so that when the primary route fails, the secondary route automatically takes over. When I set two ip route commands on the router, it works well, but when the peer IP on the primary route is down, it does not switch to the secondary route until I remove the first route command. My settings are below. How can I set this up?

        interface FastEthernet0/0
         ip address 81.12.21.100 255.255.255.248 secondary
         ip address 62.220.97.14 255.255.255.252

        ip route 0.0.0.0 0.0.0.0 62.220.97.13
        ip route 0.0.0.0 0.0.0.0 81.12.21.97 100

    Cheers, Kamal

  • Options for PCI-DSS on AWS - file integrity monitoring and intrusion detection

    - by Brill Pappin
    I need to deploy file integrity monitoring and intrusion detection software on AWS instances. I really wanted to use OSSEC; however, it does not work well in an environment where servers can auto-deploy and shut down based on load, because it requires server-managed keys to be generated. Because of that, including the agent in the AMI will not allow monitoring as soon as an instance comes up. There are many options out there, and several are listed in other posts on this site, but none that I've seen so far deal with the unique problems inherent in AWS or cloud-based deployments in general. Can anyone point me at some products, preferably open source, that we might use to cover the portions of PCI DSS that require this software? Has anyone else achieved this on AWS?

  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe, but it makes me very, let's say, particular about certain things, programming being one of them), or it may be the fact that I graduated college and still feel "meh" at programming. Reading This made me think "Oh, that's me!", but that's not really my main problem. My big problem is: any time I'm using a high-level language/API/etc., I always think to myself that I'm not really "programming". I know, I know, it sounds stupid. But I feel like if I can't figure out how to do it at the lowest level, then I'm not really "understanding" it. I do this with just about every new technology I learn: I look at the lowest level and try to understand it. Sometimes I do... most of the time I don't. I mean, I've only really been programming for 4 years (at college, if you even call it programming; our university's program was "meh"). For instance, I do a little embedded programming (with the Atmel AVR 8-bit/Arduino stuff), and I can't bring myself to use the C compiler, even though it's 8 million times easier than using assembly... it's stupid, I know. Does anyone else feel like this? I think it's just my OCD that makes me feel this way, but has anyone else ever felt like they need to go down to the lowest level of the language to even be satisfied with using it? I apologize for the very odd question, but I think it really hinders me in getting deeply rooted in a programming language and making a real application of my own. (It's silly, I know.)

  • Algorithm to reduce calls to mapping API

    - by aidan
    A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box, and the API will return the set of points that fall within that box. The problem is that the API limits the result set to 10 items, with no pagination and no indication of whether any points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, it splits the bounding box into four quadrants and recurses. It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results, so I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, I eventually get to a point where a lot of requests are returning 0 results, which isn't so efficient. Also, does the size of the limit on the result set affect the answer? (This is all assuming that I have no prior knowledge of nominal point density in the bounding box.)
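
    A minimal sketch of the recursion in Python, with the split factor pulled out so 2x2 versus 3x3 (and so on) can be measured rather than guessed; fetch stands in for the real API call:

        LIMIT = 10   # the API's hard cap per request

        def all_points(box, fetch, splits=2):
            """box = (xmin, ymin, xmax, ymax); fetch(box) -> list of (x, y)
            points, silently truncated at LIMIT. splits=2 is the quadrant split."""
            pts = fetch(box)
            if len(pts) < LIMIT:      # under the cap, so this box is complete
                return set(pts)
            xmin, ymin, xmax, ymax = box
            dx, dy = (xmax - xmin) / splits, (ymax - ymin) / splits
            found = set()             # a set: border points can fall in two sub-boxes
            for i in range(splits):
                for j in range(splits):
                    sub = (xmin + i * dx, ymin + j * dy,
                           xmin + (i + 1) * dx, ymin + (j + 1) * dy)
                    found |= all_points(sub, fetch, splits)
            return found

    One caveat for a real version: if LIMIT or more points coincide, the recursion never terminates, so a minimum box size or maximum depth is needed as a floor.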

  • Method for spawning enemies according to player score and game time

    - by Sun
    I'm making a top-down shooter and want to scale the difficulty of the game according to the score and how much time has passed. Along with this, I want to spawn enemies in different patterns and vary the intervals at which these enemies appear. I'm going for an effect similar to Geometry Wars. However, I can't think of a way to do this other than with multiple if-else statements, e.g. (checking the largest threshold first so the earlier tiers don't swallow the later ones):

        if (score > 25000) {
            // spawn x amount of enemy types 1 & 2 & 3
            // create patterns with enemies
        } else if (score > 15000) {
            // spawn x amount of enemy types 1 & 2 & 3
        } else if (score > 10000) {
            // spawn x amount of enemy types 1 & 2
        } else if (score > 1000) {
            // spawn x amount of enemy type 1
        }
        // ...etc

    What would be a better method of spawning enemies as I have described?
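
    One alternative to the ladder is to make it data instead of code: keep (threshold, enemy types) pairs in a table and derive the active set from the current score, so adding a tier is a one-line data change. A sketch in Python (thresholds and type names are placeholders); elapsed time can be folded in the same way by keying the tiers on a difficulty value computed from score and time:

        import random

        # (score threshold, enemy types unlocked at that threshold)
        SPAWN_TIERS = [
            (0,     ["type1"]),
            (10000, ["type2"]),
            (15000, ["type3"]),
            (25000, ["pattern_a"]),   # pattern spawners can share the table
        ]

        def active_types(score):
            """Everything whose threshold is reached -- no if/else ladder."""
            types = []
            for threshold, unlocked in SPAWN_TIERS:
                if score >= threshold:
                    types.extend(unlocked)
            return types

        def spawn_wave(score, count):
            return [random.choice(active_types(score)) for _ in range(count)]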

  • Why isn't 'ether proto \ip host host' a legal tcpdump expression?

    - by Ezequiel Garzon
    In its description of valid tcpdump expressions, the pcap-filter man page states:

        The filter expression consists of one or more primitives. Primitives usually consist of an id (name or number) preceded by one or more qualifiers.

    In turn, these qualifiers are type, dir, and proto. So far so good, but further down we find this:

        ip host host

    which is equivalent to:

        ether proto \ip and host host

    In the first case, ip and host are, respectively, proto and type qualifiers. What pattern does ether proto \ip follow? Isn't that, as a whole, a proto qualifier? If so, why isn't (a properly escaped) 'ether proto \ip host host' legal (without the and)?

  • How to set up Google Apps (mail) MX Records on DNSMadeEasy (screen shot included)

    - by user41847
    I am attempting to complete Google's mail MX setup. I would link to it, but new users can't post two links, and I think the following image is more important. This is what my input options are for DNS Made Easy, which manages my domain:

        http://img94.imageshack.us/img94/5662/dnsmadeeasy.gif

    I would like to confirm that I understand the fields correctly. It is my understanding that I am supposed to:

    - leave Name (Host) blank;
    - set Data to ASPMX.L.GOOGLE.COM. (and repeat for each of the server addresses provided by Google);
    - set the MX level to what Google has in the "Priority" column;
    - set the TTL as high as possible.

    Did I get it right? The nightmare scenario is that I screw up and bring everyone's mail down :P Thanks in advance for your time.
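
    Once the records are saved and have propagated, a quick lookup confirms them; a sketch assuming the third-party dnspython package, with example.com standing in for the real domain:

        # pip install dnspython
        import dns.resolver

        answers = dns.resolver.resolve("example.com", "MX")
        for rr in sorted(answers, key=lambda r: r.preference):
            print(rr.preference, rr.exchange)   # e.g. "1 ASPMX.L.GOOGLE.COM."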
