Search Results


  • Unity gizmos vs. referenced game objects

    - by DuckMaestro
    I'm designing a Unity script that I intend to be highly reusable and as easy as possible to set up within the editor. To this end, a number of properties of this script really need some kind of visual representation on screen. It is an unresolved question to me whether the design of the script should require references to placeholder game objects, OR just Vector3's and float's that have associated gizmos drawn for them. Normally a gizmo would be a natural choice, except that Unity gizmos are not directly manipulable (as far as I can tell). Because of this shortcoming I'm having to consider whether depending on references to placeholder game objects is ultimately the more designer-friendly approach, in spite of the extra setup required and the fact that it might be counter-intuitive when the placeholder game objects disappear at run-time (which my script would make them do). Is there a community standard or preference in this case? Can a Unity-experienced game programmer / designer speak to which approach they feel is more intuitive or more convenient to set up when using a 3rd-party script? Or is this just splitting hairs as long as I ship an example prefab with my script?

    Read the article

  • Is Ruby on Rails supposed to have a steep learning curve or is it just me?

    - by Anita
    I'm a self-taught programmer. I've been learning RoR since October with varying intensity (sometimes all day, sometimes nothing for several weeks). Before that I knew only Java, but knew it pretty well. I've heard so much hype about RoR and how it's supposed to make you happy, productive, etc. So far it's only made me frustrated. I learned it out of the Agile book, and I suspect part of the difficulty might have to do with my not knowing JavaScript and CSS, and having only a shaky grasp of databases and HTML. But apparently it took me much longer to complete the project in the Agile book than other people, and I still don't remember much of it. There are some things about Rails that I just can't seem to get, e.g. when to use symbols and when NOT to, or how dynamic methods are called. Recently I was given a small Rails assignment where I'm asked to make a small change to the interface. It's taken me around 25 hours and although I've made some progress in understanding the code, I still have no idea how to proceed. I can't even ask Stack Overflow because there is so much code I'll have to provide to give context. So my question is in the title: is RoR supposed to take a long time to learn or am I just slow? Can it be that I've been learning from the wrong book? My learning style is such that I either understand nothing or understand everything, if that makes sense. Thanks!

    Read the article

  • How to disable Empty Recycle Bin confirmation dialog?

    - by kjo
    In Windows 7 (at least) when one chooses Empty Recycle Bin from the RB menu, one gets prompted with a dialog like: Are you sure you want to permanently delete these 11 items? (modulo the actual number of items mentioned). Is there a way to disable this dialog (without disabling the use of the RB altogether)? NOTE 1: This dialog comes up even if one has unchecked the box labeled "Display delete confirmation dialog" in the RB Properties dialog (as long as the RB has not been disabled). NOTE 2: As alluded to above, any answer that entails disabling the use of the Recycle Bin altogether is explicitly ruled out. This includes any answer that involves selecting the button labeled "Don't move files to the Recycle Bin. Remove files immediately when deleted." in the RB Preferences window. NOTE 3: This question has nothing to do with Outlook Express.

    Read the article

  • Is The Ease Of Windows Phone Development Ruining Its Image

    - by Tim Murphy
    I was reading an article on Mashable recently by a long-time iPhone user who is living solely on a Lumia 920 at the moment and giving her assessment.  One thing that struck a nerve with me was her describing the Windows Phone ecosystem as immature.  She wasn’t saying this because of the number of apps or the big names, as most people do.  She means the quality of the apps in the store. This hit a nerve with me.  I find it hard to believe that the majority of apps on iOS are of any higher quality than on any other platform.  I believe in any ecosystem you are going to find some high-end, high-quality apps, but the majority by default will be from people who are trying to solve a problem but do not have the resources for top graphics and full-blown testing.  There will also be a large number that are just there trying to trick you into giving up some cash. Does any of this mean that we shouldn’t take notice of the complaint?  Of course not!  We should always strive to publish the best-quality apps possible.  Don’t do things like leaving default app icons and backgrounds.  Put a little effort into your design.  You should also spend as much time as possible ensuring against crashes and giving the user the best experience possible.  Think through your app’s organization and navigation.  Go the extra step of putting it into beta and letting select people use it and give you feedback before going to full release. Remember, if we want people to appreciate the Windows Phone platform we have to make sure we give them apps that they are going to enjoy using. del.icio.us Tags: Windows Phone,iPhone,iOS,Nokia,Lumia 920,Mashable

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things: very large installation size (~4 GB); composed of a core and several plugins (toolboxes). Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The control file is just the standard "all: dh $@". There is no Makefile; only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are: Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile.) Can you make debuild only rebuild .debs that need rebuilding? Or specify which .debs to rebuild? Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)

    Read the article

  • SharePoint (2010) - Can't delete Service Application

    - by Chris W
    The search service application in our farm went bonkers, complaining that it couldn't connect to itself. After multiple people fiddled to try and fix it, we've ended up with two search applications: the new one, which is working perfectly, and the original one, which is very unhappy. I've tried deleting the original Search App in Central Admin but it just won't go - it sits on the screen saying "Processing" but never completes, regardless of how long it is left. There's lots happening in the logs but I can't really decipher exactly why this isn't working. Things are working fine within the farm, but I'd ideally like to clean up this old application if possible. Are there any other options, like deleting it with stsadm? I've had a dig but can't seem to find the commands to enumerate the service applications and then delete the correct one.

    Read the article

  • Windows CE Remote Kernel Tracker - gathering data in one (or more) files during a long period of time

    - by Nic
    I'm using the "Windows CE Kernel Tracker" tool to gather data from my embedded device. This is working fine for short period of time. It seems that the tool is getting data in memory and not on disk. I'm wondering if there is a way to take the data from the device and log it in one or more file on my development computer. This could be useful for long time test period : for instance, one night or one entire day. Any ideas? p.s. I don't want to log on to the device, I want to log on my development PC.

    Read the article

  • Web sites leaking memory? IIS 7.5, Windows Server 2008 R2

    - by Charles
    I have several web sites on my Windows 2008 server that have been working flawlessly for over a year. Just a few days ago I ran into an issue where my server stopped serving up pages on some of these sites for no apparent reason. I dug into it a little more today and I see that some of my sites (they're all ASP.NET MVC 3.0 sites) are consuming over 460 MB of memory. Like I said, this just started the other day after a very long period of no issues at all. I have two questions: 1) Is there a way to cap how much memory the w3wp process can consume before it is forced to restart (restarting the app pool for a particular site), so that it doesn't keep hogging all of the memory? 2) Any ideas what could have caused this to start happening?
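
    On the first question, IIS application pools do have memory-based recycling options (a private memory threshold in the pool's recycling settings, if memory of the IIS 7.5 UI serves) that will recycle a worker process once it grows past a limit. For spotting which worker is ballooning in the first place, here is a minimal monitoring sketch, assuming Python 3 and the third-party psutil package are available on the server; the 400 MB threshold is only illustrative:

        # A sketch: list w3wp.exe worker processes whose resident memory exceeds
        # a threshold. Requires the third-party psutil package; the 400 MB limit
        # is illustrative, not a recommendation.
        import psutil

        LIMIT_MB = 400

        for proc in psutil.process_iter(["pid", "name", "memory_info"]):
            name = proc.info["name"] or ""
            if name.lower() == "w3wp.exe":
                rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
                if rss_mb > LIMIT_MB:
                    print(f"PID {proc.info['pid']}: {rss_mb:.0f} MB resident")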

    Read the article

  • Quitting a small start-up where you are a primary developer?

    - by programmx10
    Just curious to hear from other people who may have been in similar situations. I work for a small startup (very small) where I am the main developer for a major part of the app they are building; the other dev they have works on a different area than I do, so he couldn't take over my part. I've been with the company 5 months or so, but I am looking at going to a more stable company soon because it's just getting to be too much stress, overtime, and pressure for too little benefit, and I miss working with other developers who can help out on a project. The guy is happy with my work and I think I've helped them get pretty far, but I've realized I just don't like being this much "on the edge", as it's hard to tell what the direction of the company is going to be since it's so new. Also, even though I'm the main dev for the project, I would still only consider myself a mid-level dev and am selling myself as such for the new job search. Just to add more detail: I'm not a partner or anything in the company and this was never discussed, so I just work on a W2 (with no benefits, of course). I work at home, so that makes it easier to leave, I guess, but I don't want to just screw the guy over, and I also don't want to be tied in for too long. Obviously I would plan to give at least 2 weeks' notice, but should I give more? How should I bring up the subject, because I know it's going to be a touchy thing to bring up? Any advice is appreciated. UPDATE: Thanks everyone for posting on this. I have now completed the process of accepting an offer with a larger company and quitting the startup. I have given 2 weeks' notice and have offered to make myself available after that if needed; basically it's a really small company at this point, so it would only be one dev that I would have to deal with. Anyway, it looks like it may work out well as far as maintaining a good relationship with the founder for future work together. I made it out to be more of a personal / lifestyle issue than about their flaws / shortcomings, which definitely seems to help in leaving on a good note.

    Read the article

  • Creating an IP alias on a bonded interface, i.e. bond0:1

    - by bobothechimp
    System: HP ProLiant DL360 G5 running CentOS 5.4. The bonded interface has been working fine for a long time. I just went to add an alias the way I always have on a regular interface, and on first check it works (pinging on the local box) but it is not accessible from outside (iptables is turned off). In addition, with this setup the normal network response started to decline, hanging for around a minute before I could get a prompt on login. Here are my config files:

        [root network-scripts]# cat ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-eth1
        DEVICE=eth1
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0
        DEVICE=bond0
        BONDING_OPTS="mode=1 miimon=100"
        BOOTPROTO=none
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.11
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0:1
        DEVICE=bond0:1
        BOOTPROTO=static
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.12
        USERCTL=no

    Any thoughts?

    Read the article

  • what to use instead of laptop-mode?

    - by playcat
    Hello, I have Ubuntu 10.10 64-bit on an HP 6735s (Turion processor). It overheats, and I'm forced to use Turion Power Control in order to keep the core temperature at a reasonable level. One more measure that I use is putting my processors into conservative mode. That way, I'm perfectly happy with its performance, and heat is where it should be. However, after my latest upgrade something happened - the cores are back to ondemand by default, and I'm not sure if TurionPowerControl is working any more (ps axu | grep urion shows no process). In addition, I read somewhere that laptop-mode uses HDD spindown for preserving data/energy, and that HDDs have only a limited number of those spindowns, so laptop-mode usage can actually shorten the life of my HDD. I'm wondering if there is a good way to set my cores to automatically go to conservative mode? Also, what's a good way to see what voltage my cores use? On Windows I use the CPU-Z tools. Thanks, and sorry for the long explanation.
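
    For the "automatically go to conservative mode" part, one low-tech possibility is a small script run at boot as root (for example from rc.local) that writes the governor for every core through sysfs. A sketch, assuming the standard cpufreq sysfs layout:

        # A sketch: force every CPU core to the "conservative" cpufreq governor.
        # Assumes the usual sysfs layout and must be run as root.
        import glob

        for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
            with open(path, "w") as f:
                f.write("conservative")
            print("set", path)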

    Read the article

  • How to find malicious IPs?

    - by alfish
    Cacti shows irregular and pretty steady high bandwidth to my server (40x the normal), so I guess the server is under some sort of DDoS attack. The incoming bandwidth has not paralyzed my server, but it is of course consuming bandwidth and affecting performance, so I am keen to figure out the possible culprit IPs, add them to my deny list, or otherwise counter them. When I run: netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n I get a long list of IPs with up to 400 connections each. I checked the most frequently occurring IPs, but they come from my CDN. So I am wondering what is the best way to monitor the requests that each IP makes in order to pinpoint the malicious ones. I am using Ubuntu Server. Thanks
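
    Building on that same netstat output, here is a sketch of the tally with the CDN addresses filtered out, so only unexpected heavy hitters remain (Python 3; the CDN prefixes shown are placeholders for the real ranges):

        # A sketch: count connections per remote IP from netstat, ignoring known
        # CDN ranges. The CDN_PREFIXES values are placeholders - substitute the
        # addresses your CDN actually connects from.
        import subprocess
        from collections import Counter

        CDN_PREFIXES = ("203.0.113.", "198.51.100.")  # placeholder ranges

        out = subprocess.run(["netstat", "-ntu"], capture_output=True, text=True).stdout
        counts = Counter()
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 5 and fields[0].startswith(("tcp", "udp")):
                ip = fields[4].rsplit(":", 1)[0]
                if not ip.startswith(CDN_PREFIXES):
                    counts[ip] += 1

        for ip, n in counts.most_common(20):
            print(f"{n:6d}  {ip}")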

    Read the article

  • What is the better way to retrieve data in an application that needs to handle a limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since adding my code snippets would make this question really long. Instead, I am pretty interested in knowing better ways to retrieve data in an application that needs to handle a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis depending on user preference, the actual version update happens only once per few months or so, since this is done by some other authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups) (days in a week) (n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings. Now, although I have used SQLite for the moment, I want to know how to make the update on the database. Should I show a progress dialog saying the application is updating and exit the app when it's done? Since my updates are infrequent I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to be made while the user is searching, which is done using the database - this will cause a "database already open" exception; at least I have faced this problem before. Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade the performance?
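
    On the update question specifically, the write can be kept short and safe by doing the version check and the whole replacement inside one transaction, so a reader never sees a half-updated schedule and a blocking progress dialog is hardly needed. Here is a sketch in Python with made-up element and column names, since the real schema isn't shown; the same transaction idea carries over to Android's SQLiteDatabase:

        # A sketch: replace the cached schedule only when the downloaded XML is newer.
        # Element, attribute and column names are made up for illustration.
        import sqlite3
        import xml.etree.ElementTree as ET

        def update_schedule(db_path, xml_text):
            root = ET.fromstring(xml_text)
            new_version = int(root.get("version", "0"))

            con = sqlite3.connect(db_path)
            try:
                con.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")
                con.execute("CREATE TABLE IF NOT EXISTS schedule (grp TEXT, day INTEGER, entry TEXT)")
                row = con.execute("SELECT value FROM meta WHERE key = 'version'").fetchone()
                if row and int(row[0]) >= new_version:
                    return False  # already up to date
                with con:  # one transaction: readers never see a half-written schedule
                    con.execute("DELETE FROM schedule")
                    for grp in root.iter("group"):
                        for day in grp.iter("day"):
                            for entry in day.iter("entry"):
                                con.execute("INSERT INTO schedule VALUES (?, ?, ?)",
                                            (grp.get("name"), int(day.get("index", "0")), entry.text))
                    con.execute("INSERT OR REPLACE INTO meta VALUES ('version', ?)", (str(new_version),))
                return True
            finally:
                con.close()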

    Read the article

  • Can I remotely monitor printf results of a C program?

    - by Mota
    I have a long-running C program which I started from Terminal.app under gdb, using: gdb program_name, then run at the gdb prompt. I'm using many printf statements to monitor the progress of the program. Unfortunately, the screen of the computer has been frozen since yesterday, but the process is still running. My question is, can I watch the progress of the program (i.e. the results of the printf statements) remotely? I'm not that familiar with the terminal, but I know how to ssh and do some simple terminal tasks. The OS of the machine with the frozen screen is Mac OS 10.6.
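
    Nothing below recovers the output of the run that is already in progress, but for future runs one way to make the printf output watchable from anywhere is to have a tiny wrapper copy the program's stdout into a log file, which can then be followed over ssh with tail -f. A sketch (the program name and log path are placeholders, and a C program whose stdout is a pipe may need fflush or setvbuf calls so printf output isn't held in a buffer):

        # A sketch: run a program and copy its stdout both to the terminal and to a
        # log file, so a later "ssh user@host tail -f /tmp/program.log" can follow it.
        # "./program_name" and the log path are placeholders.
        import subprocess
        import sys

        LOG = "/tmp/program.log"

        with open(LOG, "a", buffering=1) as log, \
             subprocess.Popen(["./program_name"], stdout=subprocess.PIPE,
                              text=True, bufsize=1) as proc:
            for line in proc.stdout:
                sys.stdout.write(line)   # still visible locally
                log.write(line)          # and available remotely via tail -f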

    Read the article

  • Should I use subdomains or subfolders for my user groups?

    - by bilygates
    Hello, I run a photography website where each user has their own subdomain (i.e. user.site.com). I'm thinking of adding user groups but I'm unable to decide if I should also associate a separate subdomain or simply a subfolder with each group:

    Subfolders (www.site.com/groups/my-group)
        Pros: Easier to maintain from a technical point of view.
        Cons: Harder to memorize. The URLs can get really long (www.site.com/groups/my-group/albums/my-album/).

    Subdomains (my-group.site.com)
        Pros: Easier to memorize. Shorter URLs. One might have the impression that such a URL is somewhat more "independent" from the main site.
        Cons: Group and user names belong to the same namespace, so we need to check for collisions when creating a new user/group. One cannot determine the content of the page by only reading the URL: is x.site.com a user page or a group page?

    What's your opinion on the matter? I should note that DeviantArt.com uses the 2nd option (that's where I got the idea). Thank you in advance!
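
    If the subdomain option wins, the collision check listed under its cons is small but has to run for both user and group signups (and probably against a reserved-word list too). A rough sketch with hypothetical table and column names:

        # A sketch: a proposed name must be free in *both* namespaces, since users
        # and groups would share <name>.site.com. Table/column names are hypothetical.
        import sqlite3

        RESERVED = {"www", "mail", "api", "blog", "groups"}  # illustrative

        def name_available(con, name):
            name = name.lower()
            if name in RESERVED:
                return False
            row = con.execute(
                "SELECT 1 FROM users WHERE lower(username) = ? "
                "UNION SELECT 1 FROM user_groups WHERE lower(groupname) = ?",
                (name, name),
            ).fetchone()
            return row is None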

    Read the article

  • Why does the MOVE command in DOS treat wildcard patterns oddly in this case?

    - by Adisak
    I am using the "move" command with a wildcard pattern in the CMD prompt under Windows 7. In my source directory, I have the following files: movie1.avi movie1.avi_metadata movie2.avi movie2.avi_metadata If I type the command move source\*.avi dest it will move all four files even though I would expect it to only move the two *.avi files and not the *.avi_metadata files. As expected, move source\*.a dest and move source\*.av dest don't move any files. However when the length of the extension for the wildcard pattern is 3 characters, it will move all extensions that begin with those first three characters. Is this a bug in the "move" command or expected behavior and is it documented anywhere? Edit: John Watts notes that this is probably do to "short" filenames. Is it possible then to make commands in the CMD interpreter only operate on long filenames and to ignore short filenames?

    Read the article

  • stat and ls show wrong file size (terabytes wrong)

    - by WolleTD
    OK, I have a bunch of vCard files, all about 200 to 300 bytes in size. While trying to get them archived, I wondered why that takes so long and discovered that there is one file with a wrong size: both ls and stat show a size of about 8.1 terabytes. That's amazing, because my SSD is only about 250 gigabytes in size. There are some other files with wrong sizes too, but this is clearly the biggest one. I already gave it an fsck, but there seem to be no errors in the (ext4) filesystem. How can I get rid of this wrong size? Thanks, Wolle
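
    One quick check before anything else: compare the file's apparent size with the space it actually occupies. A huge st_size over a handful of allocated blocks means the length field is bogus (or the file is sparse) rather than terabytes of real vCard data, and one way to get rid of it is then to copy the readable bytes to a new file and replace the original. A sketch:

        # A sketch: compare a file's apparent size with its allocated space.
        # Usage: python3 checksize.py broken.vcf
        import os
        import sys

        st = os.stat(sys.argv[1])
        apparent = st.st_size
        allocated = st.st_blocks * 512   # st_blocks counts 512-byte units on Linux

        print("apparent size :", apparent, "bytes")
        print("allocated     :", allocated, "bytes")
        if apparent > max(allocated, 4096) * 8:
            print("apparent size far exceeds allocated space (sparse or corrupt length)")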

    Read the article

  • Ubuntu 10.04: boot error for custom compiled kernel - gave up waiting for root device

    - by atharva
    I have installed Lucid on my Lenovo laptop (Y410 series, x86 platform) and it is working fine. Now I have compiled kernel 2.6.37 downloaded from the kernel tree. I followed the usual procedure for compiling a kernel (make menuconfig, make, make modules, etc.). Then I created the initrd image using mkinitramfs and updated my grub using the update-grub command. update-grub detects the initrd image of the compiled kernel. However, when I boot from this kernel it gives me the following error:

        Gave up waiting for root device. Common problems:
        - Boot args (cat /proc/cmdline)
          - Check rootdelay= (did the system wait long enough?)
          - Check root= (did the system wait for the right device?)
        - Missing modules (cat /proc/modules; ls /dev)
        ALERT! root=UUID=/... does not exist

    and then it falls onto the initramfs prompt. I have tried the following solutions discussed in different Ubuntu forums: disabling the UUID and pointing root=/dev/sda8 (sda8 is where my kernel images reside, both the default kernel and the compiled one) in /etc/default/grub; compiling the kernel using CONFIG_DEVTMPFS=y as suggested here. Still I am unable to boot from the compiled kernel. Could someone please suggest a solution?

    Read the article

  • moving my site, IP change worries...

    - by Sherif Buzz
    Hi all, my site has outgrown the shared hosting account it's on and I've set up a VPS that I'll be moving to soon. I cannot keep the same IP between my new account and the old one, and I'm a bit at a loss as to how to minimize user downtime while the new IP propagates to all DNS caches. Note that I cannot have the site running on both accounts at the same time, as it's a dating site and this would cause data inconsistency. Here's what I am planning to do: Put up an 'under maintenance' page on the old host. Get the site up and running on the new host, and update the domain to point to the new host. Hope the downtime isn't too long. Would it be a good idea to have a link on the page in (1) that opens the new site using its IP? Or even redirect all requests at the old host to the new one (again by IP)? Any advice much appreciated.

    Read the article

  • Reinstall Windows on mass computers

    - by user791022
    At work there are about 40 Dell PCs - half of them run Windows Vista and the other half Windows 7. Some of the Dell PCs are 220s and 230s models. If I want to re-install Windows I can use a Recovery/Recovery disk on any of the Dell PCs (220s or 230s). It takes too long to complete the setup; for example, after reinstalling Windows I need to install the drivers, update Windows, install IE9, create default user accounts and so on. It takes about 3 hours to complete the setup! I am looking for a solution where I can create an image of Windows (with drivers and the setup completed) and then use the same image on any PC at work. If one of the PCs needs reinstalling, I would like to reinstall via the network (LAN) by grabbing the image. Is there something that, before booting into Windows, lets the computer check the network and download the image?

    Read the article

  • ESXi with iSCSI SAN slows down with multiple VMs running

    - by varesa
    I have a server with ESXi 5 and iSCSI-attached network storage (4x1TB RAID-Z on FreeNAS). The two machines are connected to each other with Gigabit Ethernet, with a ProCurve switch in between. After a while, if I have many (4-5 or more) VMs running, they start to become unresponsive (long delays before anything happens). We are trying to find the reason behind this. Today we looked at esxtop and found that the DAVG of that iSCSI LUN stays at 70-80; I read that anything above 30 is critical! What could be causing those high response times?

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include:

        - Data is wholly contained within XML nodes, in flat text files.
        - There is a "native" - or at least unobtrusive - method with which to perform Create/Read/Update/Delete (CRUD) operations onto the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, depending on the complexity of the set of dependencies.
        - Server-side file-system reads and writes.
        - A client-side interface element, accessible in any browser without a plug-in.

    Some extra, preferred (but optional) requirements include:

        - Responds to simple SQL, or queries with similar syntax.
        - Serves the data on a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON.

    A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that - unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for prime time, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
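
    Not a ready-made product, but as a sketch of how small the read/query side can be when the store really is flat XML files on disk: Python's ElementTree already supports a useful subset of XPath, and a CGI or tiny HTTP wrapper around something like the following would cover the server-side read case. The directory layout, element names and attributes below are invented for illustration:

        # A sketch: run an XPath-style query across a directory tree of flat XML files.
        # File layout, element and attribute names are invented for illustration.
        import glob
        import os
        import xml.etree.ElementTree as ET

        def query(store_dir, xpath):
            """Yield (path, element) for every element matching the XPath expression."""
            for path in glob.glob(os.path.join(store_dir, "**", "*.xml"), recursive=True):
                root = ET.parse(path).getroot()
                for elem in root.findall(xpath):
                    yield path, elem

        # e.g. every <record> whose status attribute is "active":
        for path, rec in query("data", ".//record[@status='active']"):
            print(path, rec.get("id"))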

    Read the article

  • Java and C# in web development [on hold]

    - by azalut
    I am wondering whether C# development (ASP.NET) is more of a "rapid development" affair or something "big" like Java EE/Spring. We all know that RoR and Django are really rapid-development frameworks - so is C# closer to Java's slower, heavier style of development, or to frameworks like the two above, Django and RoR? I am, for now, an amateur Java programmer and sometimes I get annoyed by the amount of code that has to be written to create even a short CRUD app. We need a lot of skills to create even a small app. I want some change, at least for some time, and to learn something new. I tried (just a few hours each) first RoR, then Django, and now I am writing in C#. It seems to be like Java but a little bit extended. With respect to future work as a professional coder - is it worthwhile to know both competing technologies, like Java (and its frameworks) and C# with .NET (ASP.NET, for example)? Maybe the better choice is Python? Or should I just stop being stupid and stick with Java but with another framework (and master my Java skills), or with JavaScript and jQuery to be better at web development? Actually this question depends on your own opinions, which is why I know it could be blocked by admins. But the main question is at the top of the post, I mean: is C# web development rapid, or closer to Java? I am afraid that if I don't try, I will regret it in the future, when I wake up and think: oh my god, how could I not get familiar with (another_technology_or_language). Thanks for your attention :) PS: I asked the same question on Stack Overflow, but it was put on hold for being opinion-based. Hope it fits here ;)

    Read the article

  • Where can I learn about managing domain names for my websites? [closed]

    - by Shahbaz
    [I originally asked this question on serverfault.com, where it was closed as 'out of scope.' Hopefully it is appropriate for this forum.] I am a developer who doesn't understand how to effectively manage Internet domain names. Say I registered a name with Namecheap and host a website on Linode. Now what is an A record? What is a name server, and do I host it with Namecheap or Linode? Why would I pay Amazon when others are free? Does any of this matter in terms of website latency or reliability? I feel like a script kiddie, copying and pasting others' configurations and hoping they work. Is there a book or other resource that explains all this? I know Amazon is full of books about DNS, but AFAIK they are about setting up DNS servers for local networks, not the Internet. P.S. To emphasize: I'm asking for books or long write-ups which explain this to technically competent people who just haven't had to think about the role of commercial registrars, name servers, commercial hosts, and commercial websites, and how all the parts play together on the real Internet (not local networks).
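
    As a tiny concrete anchor for one of those terms: an A record is just the mapping from a hostname to an IPv4 address, served by whichever name servers the domain is delegated to (the registrar's, the host's, or a third party's). The end result is that a lookup like the one below returns the address of the box running the site; the hostname is a placeholder:

        # A sketch: resolve a hostname to the IPv4 address its A record points at.
        # "www.example.com" is a placeholder for your own domain.
        import socket

        print(socket.gethostbyname("www.example.com"))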

    Read the article

  • Is it possible to dump the names of all the open files in notepad++ to a file?

    - by mark
    So, I dragged and dropped multiple files onto Notepad++. The files came from different directories and were selected using different criteria, so I have many files open in Notepad++. Now I need to have a list of all the open files in another file. Right now, my only option is to script the decisions that guided me in selecting the files in the first place. That is probably best in the long term, but I wonder if there is a quick way in Notepad++ - some plugin magic or whatever. Suggesting another free editor which has this function is a good option too (not that I am going to ditch Notepad++, God forbid).
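
    One plugin-free possibility, assuming the "Remember current session for next launch" option is on so Notepad++ keeps a session.xml under %AppData%\Notepad++ listing the open files: parse that file and dump the names. The path and attribute name below are from memory, so verify them against your own session.xml before relying on this. A sketch:

        # A sketch: print the files recorded in Notepad++'s saved session.
        # The session.xml location and the "filename" attribute are from memory -
        # check them against your own installation.
        import os
        import xml.etree.ElementTree as ET

        session = os.path.expandvars(r"%AppData%\Notepad++\session.xml")
        root = ET.parse(session).getroot()
        for f in root.iter("File"):
            print(f.get("filename"))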

    Read the article
