Search Results

Search found 16329 results on 654 pages for 'b long'.

Page 323 of 654

  • Headset for phone calls in the datacenter

    - by Cakemox
    Datacenters are noisy places. On occasion it is necessary for an administrator or technician to troubleshoot a problem in the datacenter while on a phone call or conference call. Unfortunately, these can be long, drawn out conversations, depending on the issue at hand. Mobile phones are pretty lousy in these circumstances: the person in the datacenter can't easily hear over the noise, and the mic tends to make things unpleasant for the listeners. In-ear monitors make it easier for the person in the datacenter to hear, but don't do a whole lot for the people listening on the other end. What headset options are there for making phone calls in the noise of the datacenter less noticeable for all involved?

    Read the article

  • RAID 5 creation (using mdadm): lots of reads/writes on creation, is this normal?

    - by Gbrits
    I created a software RAID 5 array using: mdadm -C /dev/md2 -l5 -n4 /dev/sd[i-l] At the same time I'm using dstat to watch I/O activity: dstat -c -d -D total,sda1,md2,sdi,sdj,sdk,sdl -l -m -n and I notice that disks sd[i-k] are all being read from while sdl is being written to. Now, I do understand that RAID 5 has to be initialized, but it takes a really long time, and all disks are clean and formatted (using XFS), so I figure there might be some kind of shortcut to skip the (unnecessary?) checking. Is there? The creation is part of a time-critical nightly batch process (run on Amazon EC2), so it's not a one-time thing. Thanks, Geert-Jan
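    If the member disks really are blank, one possible shortcut is mdadm's --assume-clean option, which skips the initial resync that produces all that read/write activity. Below is a minimal sketch of how the nightly batch step might invoke it, in Python only for illustration; the wrapper function and its parameters are hypothetical, and parity is merely assumed correct, so this is only safe on genuinely zeroed disks.

        # A minimal sketch, assuming the nightly batch step is scripted in Python
        # and the member disks are truly blank: --assume-clean tells mdadm to skip
        # the initial parity resync. Device and array names are from the question;
        # the wrapper itself is hypothetical.
        import subprocess

        def create_raid5(devices, array="/dev/md2", skip_initial_sync=True):
            cmd = ["mdadm", "--create", array, "--level=5",
                   "--raid-devices=%d" % len(devices)]
            if skip_initial_sync:
                # Parity is *assumed* correct; only safe if the disks are zeroed.
                cmd.append("--assume-clean")
            cmd.extend(devices)
            subprocess.run(cmd, check=True)

        create_raid5(["/dev/sdi", "/dev/sdj", "/dev/sdk", "/dev/sdl"])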

    Read the article

  • Split a table in Word without losing the title row

    - by Shane Hsu
    Word has a feature to repeat the title row of a table when the table is so long that it spans several pages. I need to split my data into categories, one per page, and I did that by splitting the table and inserting page breaks so that each category sits on a page of its own. Now I have several pages of data, but only the first page has the title row. Is there any way to do this other than manually adding the title row to all the other pages?

    Original data:

        _________________
        | Cat.   Data   |
        |  1      *     |
        |  1      *     |
        |  1      *     |
        |  1      *     |
        |  1      *     |
        |  1      *     |
        |  2      *     |
        |  2      *     |
        |  2      *     |
        |  2      *     |
        |  3      *     |
        |___3______*____|

    And then turn it into:

        _________________
        | Cat.   Data   |
        |  1      *     |
        |  1      *     |
        |  1      *     |
        |  1      *     |
        |  1      *     |
        |___1______*____|
        Next page
        _________________
        | Cat.   Data   |
        |  2      *     |
        |  2      *     |
        |  2      *     |
        |___2______*____|
        Next page
        _________________
        | Cat.   Data   |
        |  3      *     |
        |___3______*____|

    Read the article

  • Best practices for PHP MVC routing

    - by dukeofweatherby
    I have a custom MVC framework that is in a constant state of evolution. There's a long-standing debate with a co-worker about how the routing should work. Consider the following directory structure:

        /core/Router.php
        /mvc/Controllers/{Public controllers}
        /mvc/Controllers/Private/{Controllers requiring a valid user}
        /mvc/Controllers/CMS/{Controllers requiring a valid user and specific roles}

    The question is: "Where should the current user's authentication be established: in the Router, when choosing which controller/directory to load, or in each Controller?" My argument is that when authenticating in the Router, an Error Controller is created instead of the requested Controller, informing you of your mishap, and the directory structure clearly indicates the authentication required. His argument is that a router should do routing and only routing; leave it to the Controller to handle authentication on a case-by-case basis, which is more modular and allows more flexibility should changes need to be made. PHP MVC - Custom Routing Mechanism alluded to this, but that topic was of a different nature. Alternative suggestions would be welcome as well.
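    To make the two positions concrete, here is a minimal sketch, in Python rather than PHP purely to keep it short; all class, route and directory names are hypothetical and not part of the framework being discussed.

        # Option A: the router maps the directory a controller lives in to an auth
        # requirement and substitutes an ErrorController before the requested
        # controller is ever constructed.
        # Option B: the router stays "dumb" and each protected controller (or a
        # shared base class) performs its own check.

        class User:
            def __init__(self, authenticated=False):
                self.authenticated = authenticated

        class ErrorController:
            def handle(self, user):
                return "403: login required"

        class PublicController:
            def handle(self, user):
                return "public page"

        class AuthenticatedController:
            # Option B: every protected controller inherits this check.
            def handle(self, user):
                if not user.authenticated:
                    return ErrorController().handle(user)
                return self.render(user)

        class AccountController(AuthenticatedController):
            def render(self, user):
                return "private account page"

        ROUTES = {"/home": ("Public", PublicController),
                  "/account": ("Private", AccountController)}

        def route(path, user):
            directory, controller_cls = ROUTES[path]
            # Option A: the directory name decides whether auth is required.
            if directory in ("Private", "CMS") and not user.authenticated:
                return ErrorController().handle(user)
            return controller_cls().handle(user)

        print(route("/account", User(authenticated=False)))  # 403: login required
        print(route("/account", User(authenticated=True)))   # private account page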

    Read the article

  • How are suspected DoS attacks handled by webservers?

    - by Jan Kuboschek
    I rent a server somewhere out in Canada or so that I'm using to host a website of mine. That website has close to 400,000 pages that I wanted to index today. For that, I wrote a crawler a while back (see JCrawler on Stackoverflow.com). Now, I'm greedy and didn't want it to take too long so I ran multiple threads resulting in some 60+ requests per second from my IP. A couple minutes later, my server locked me out. I can still FTP into it, but I can't HTTP it. As server administrator or user, do you have any idea how servers usually handle these situations? Is it common to place a permanent or temporary ban on the IP or what is typically done? Naturally, I'll re-run my software with fewer requests once I'm back on.
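    For that re-run with fewer requests, a minimal throttling sketch is below. It is in Python only for illustration (the actual crawler in the question is JCrawler), and the one-request-per-second figure and URL list are placeholders, not limits the host has published.

        # A minimal sketch of polite crawling: cap the request rate so the host's
        # DoS heuristics are never tripped.
        import time
        import urllib.request

        def crawl(urls, delay_seconds=1.0):
            for url in urls:
                try:
                    with urllib.request.urlopen(url, timeout=30) as resp:
                        resp.read()
                except Exception as exc:    # log and keep going on single failures
                    print(f"failed: {url}: {exc}")
                time.sleep(delay_seconds)   # throttle: one request per interval

        crawl(["http://example.com/page1", "http://example.com/page2"])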

    Read the article

  • How can I make SharePoint use a short URL (e.g. http://internal.com instead of http://internal.com/sites/osfc/Pages/Default.aspx)

    - by StevenB
    Hi all, I'm new to SharePoint 2007. Currently the home page is htp://internal.com/sites/osfc/Pages/Default.aspx, but I would like to use htp://internal.com, or have htp://internal.com redirect to the long URL. How can I do this? I thought of using a 301 redirect, but the permissions on the site in IIS don't allow users to view files placed in the root, and I don't want to mess with the permissions. Currently if I visit http://internal.com I see a SharePoint Access Denied page (htp://internal.com/_layouts/AccessDenied.aspx?Source=%2f). Note: I've used htp:// above as Server Fault doesn't allow me to post more than one link. Many thanks Steven

    Read the article

  • How can I convert an ordinary text file to a .csv file, and import it to Excel?

    - by Xavierjazz
    I have a group of names and addresses that I would like to import into Outlook. At the moment I have imported them into Excel, but all names and addresses are in one long entry. All are already separated by a comma. How can I get Excel to select each "value" and move it to a separate cell? Edit: I had already tried taking a text file and saving it as a .csv file. However, all contacts load into a single cell. I am using Excel 2003. Thanks.
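    If it is easier to fix the file before Excel ever sees it, a minimal sketch of the conversion is below, assuming one contact per line with the fields already separated by commas (the exact layout of the original file isn't shown in the question); the file names are placeholders.

        # A minimal sketch: normalize the text file into a proper .csv. The csv
        # module quotes any field that itself contains a comma, and Excel 2003
        # will then place each field in its own cell.
        import csv

        with open("contacts.txt", newline="") as src, \
             open("contacts.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            for line in src:
                fields = [f.strip() for f in line.split(",")]
                if any(fields):              # skip blank lines
                    writer.writerow(fields)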

    Read the article

  • An experiment: unlimited free trial

    - by Alex Davies
    The .NET Demon team have just implemented an experiment that is quite a break from Red Gate’s normal business model. Instead of the tool expiring after the trial period, it now continues to work, but with a new message that appears after the tool has saved you a certain amount of time. The rationale is that a user who stops using .NET Demon because the trial expired isn’t doing anyone any good. We’d much rather people continue using it forever, as long as everyone who finds it useful and can afford it still pays for it. Hopefully the message is annoying enough to achieve that, but not so annoying that people uninstall it. It’s true that many companies have tried this before with mixed results, but we have a secret weapon. The perfect nag message? The neat thing for .NET Demon is that we can easily measure exactly how much time .NET Demon has saved you, in terms of unnecessary project builds that Visual Studio would have done. When you press F5, the message shows you the time saved, and then makes you wait a shorter time before starting your application. Confronted with the truth about how amazing .NET Demon is, who can do anything but buy it? The real secret, though, is that while you wait, .NET Demon gives you entertainment, in the form of a picture of a cute kitten. I’ve only had time to embed one kitten so far, but the eventual aim is for a random different kitten to appear each time. The psychological health benefits of a dose of kittens in the daily life of the developer are obvious. My only concern is that people will complain after paying for .NET Demon that the kittens are gone.

    Read the article

  • Need to transfer large video from Camera!-app to computer

    - by Henrik Söderlund
    I have a jailbroken iPhone 4S and am trying to transfer a 25-minute-long HD video that I recorded through SmugMug's Camera Awesome (Camera!) app. Once recorded, the video stays within that app's interface until you choose to save it onto the camera roll. When I try this option, the app just stalls, even when left for over an hour. I assume the video is too large to copy. I am trying the iExplorer app on my MacBook Air. I can find the Documents folder inside the Camera! folder, but as soon as I access it to view the contents, iExplorer stalls completely, probably because it is trying to read the enormous video. Is there a clever way to transfer this file onto the computer? I can use iFile on the iPhone to transfer over Wi-Fi, but I don't know the Camera! app's Documents folder location on the file system.

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things:

        - Very large installation size (~4 GB)
        - Composed of a core and several plugins (toolboxes)

    Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The rules file is just the standard dh $@ wrapper; there is no Makefile, only copying of files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are:

        - Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.)
        - Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile.)
        - Can you make debuild only rebuild .debs that need rebuilding? Or specify which .debs to rebuild?
        - Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there's hundreds of them. :/)

    Read the article

  • Never-ending issues with GRUB (Ubuntu 14.04 on ASUS with Windows 8 dual boot)

    - by Mariana
    This is the most frustrating issue I have ever run into using Ubuntu and Windows on the same machine. I have an ASUS K46CB with 6 GB RAM and preinstalled Windows 8.1 64-bit. I have successfully installed Ubuntu 14.04 LTS, also 64-bit. To do so, I followed this tutorial whenever possible. I only failed on the "disable Secure Boot" part: there is no mention of Secure Boot or even UEFI in my BIOS! Screenshots from other BIOSes of the same model show the option under Boot, but in mine there is absolutely none. Because of this, I cannot boot into Ubuntu; the computer loads straight into Windows. I tried running Boot-Repair, but got an error (I can show the log, but it's pretty long). Does anyone know how to fix this issue? UPDATE: I reinstalled Ubuntu. Same problem, it goes straight to Windows. Boot-Repair informs me that I am using Windows in Legacy mode. It executed with no errors this time, but after restarting, GRUB was still missing. I still can't turn off Secure Boot. UPDATE: I tried using Boot-Repair to install GRUB on a 1 MB grub boot partition. It still boots straight to Windows. I feel like punching something.

    Read the article

  • Advanced TSQL training

    - by Dave Ballantyne
    Over the past few years, I've had it on my to-do list to write and deliver a full-scale SQL Server training course, not just an hour-long, bite-size session at user groups and conferences. To me, SQL Server development is not just knowing and remembering the syntax of commands. Sometimes I semi-jest that I have “written a MERGE statement without looking up the syntax”, but I know from my interactions on and off line that I am far from alone in this. In any case, we have an awesome tool in the internet, which is great at looking things up. When developing SQL Server based solutions, what matters more is knowing the internals of the engine. SQL Server is a complex piece of software, and we need to be able to understand, to a fairly low level (you can always dive deeper), the choices that it makes and why it makes them, in order to deliver performant, reliable, predictable and scalable systems to our customers and end users. This is the view I shall be taking over two days in March (19th and 20th) in London and, TBH, one I don't see taken often enough. Early-bird discounts are available until 31 December. Full details of the course and a high-level view of the bullet points we shall be covering are available at the Technitrain site (http://tinyurl.com/TSQLTraining).

    Read the article

  • Proving file creation dates

    - by Nils Munch
    In a weird case surrounding the copyright of a software system I have developed, I rely on the fact that I have all the source files of the system in question, created long before I joined the company that claims to own the system. The company being sued by yours truly says that I have simply manipulated the files to appear to be from that date. Is it even possible to fake or manipulate creation dates? And if so, how can I "prove" that the files really are that old? Luckily, I stored my project on GitHub, which confirms that the files are from that era, but that is beside the point. I run purely Apple OS X.

    Read the article

  • Reinstall Windows on many computers at once

    - by user791022
    At work there are about 40 Dell PCs; half of them run Windows Vista and the other half Windows 7. Some of the Dell PCs are 220 and 230 models. If I want to reinstall Windows, I can use a recovery disk on any of the Dell PCs (220s or 230s), but it takes too long to complete the setup: after reinstalling Windows I need to install the drivers, update Windows, install IE9, create the default user accounts and so on. It takes about 3 hours to complete the setup! I am looking for a solution where I can create an image of Windows (with drivers and a completed setup) and then use the same image on any PC at work. If one of the PCs needs reinstalling, I would like to reinstall it over the network (LAN) by grabbing the image. Is there something that, before booting into Windows, lets the computer check something via the network and download the image?

    Read the article

  • ESXi with iSCSI SAN slows down with multiple VMs running

    - by varesa
    I have a server with ESXi 5 and iSCSI-attached network storage (4x1 TB RAID-Z on FreeNAS). The two machines are connected to each other over Gigabit Ethernet, with a ProCurve switch in between. After a while, if I have many (4-5 or more) VMs running, they start to become unresponsive (long delays before anything happens). We are trying to find the reason behind this. Today we looked at esxtop and found that the DAVG of that iSCSI LUN stays at 70-80. I read that anything over 30 is critical! What could be causing those high response times?

    Read the article

  • More productive alone than in a team?

    - by Furry
    When I work alone, I can be super-productive if I want to be: running prototypes within a day, something you can deploy and use within a few days. Not perfect, but good enough. I have also had this experience a few times when working directly with someone else. Either of us could do the whole thing, but it was more fun not to do it alone, and also quicker. The right two people can take an admittedly not-too-large project to new levels. Now at work we have a seven-person team and I do not feel nearly as productive. Not even nearly. Certain stuff needs to be checked against something else, which then also needs to take care of some new requirement that came in three days ago. All sorts of stuff, mostly important, but often just technical debt from long ago, or a misconception, or different vocabulary for the same thing, or sometimes just a not-too-technically-thought-out "great idea" from someone who wants to have their say, and so on. Digging down the rabbit hole, I think to myself that I could do larger portions of this work faster alone (and somewhat better, too), but it's not my responsibility (someone else gets paid for that), so by design I should not care. But I do, because certain things go hand in hand (as you may have experienced when doing side projects on your own). I know this is something Fred Brooks has written about, but still: what's your strategy for staying as productive as you know you could be in the cubicle? Or did you quit for some related reason, and if so, where did you go?

    Read the article

  • How can I determine the first visible tile in an isometric perspective?

    - by alekop
    I am trying to render the visible portion of a diamond-shaped isometric map. The "world" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The "view" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinate obviously become invalid. I guess this boils down to asking "how can I find the intersection between the world X axis and the view X axis?"
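    One way to keep the starting tile valid near the map edges is to convert the view's upper-right corner as before and then clamp the result to the map bounds. A minimal sketch follows, in Python only for illustration; the 2:1 diamond projection, tile size, map size and function names are assumptions, not taken from the actual engine.

        # A minimal sketch of finding the first visible tile: convert the view's
        # upper-right corner to map coordinates, then clamp to the map bounds so
        # the starting tile stays valid when the view origin lies outside the world.
        TILE_W, TILE_H = 64, 32          # screen size of one diamond tile (assumed)
        MAP_COLS, MAP_ROWS = 100, 100    # map dimensions in tiles (assumed)

        def view_to_map(view_x, view_y, camera_x, camera_y):
            """Convert view (pixel) coordinates to fractional map coordinates."""
            world_x = view_x + camera_x
            world_y = view_y + camera_y
            map_x = world_y / TILE_H + world_x / TILE_W
            map_y = world_y / TILE_H - world_x / TILE_W
            return map_x, map_y

        def first_visible_tile(camera_x, camera_y, view_width):
            # The rendering loop starts at the view's upper-right corner.
            map_x, map_y = view_to_map(view_width, 0, camera_x, camera_y)
            # Clamp so the starting tile is always inside the world, even when
            # the corner itself falls outside the map.
            col = min(max(int(map_x), 0), MAP_COLS - 1)
            row = min(max(int(map_y), 0), MAP_ROWS - 1)
            return col, row

        print(first_visible_tile(camera_x=-200, camera_y=-100, view_width=800))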

    Read the article

  • How do programmers balance uppercase and lowercase naming styles for files and folders between work and life?

    - by sojyq
    I am a programmer from China, and I like to use English words to name my files and folders, whether for work or for personal life. For example: Movie, Work, QtProjects, Music and so on. On Windows I keep the habit of capitalizing the first letter of every file and folder name. But now I work on Ubuntu, and I found that all file and folder names are lowercase, apart from the default folders such as Music, Movies and so on. And then I realized that in the Linux world most people like to use all lowercase for their files and folders, for two reasons (1. Linux is case-sensitive. 2. It is faster for shell commands.). After work, when I switch from Linux to Windows, I am confused about whether to use the all-lowercase style or the first-letter-uppercase style to name my files in Windows. I'm caught in a dilemma. I think all lowercase is more efficient, but first-letter uppercase is more readable. I thought about it for a long time and wanted to come up with a good rule to balance the two naming conventions, but I failed. I want to ask: how do you balance the uppercase and lowercase habits on Windows, Mac and Linux between work and personal life? Thank you very much! (My current approach is that when I am on Linux I use all lowercase for files and folders, but on Windows and Mac OS X I can't find a good reason to convince myself to use all lowercase; I think that on Windows and Mac OS X the first-letter-uppercase style is more readable and beautiful for me.)

    Read the article

  • Java and C# in web development [on hold]

    - by azalut
    I am wondering whether C# development (ASP.NET) is more of a "rapid development" experience or something "big" like Java EE/Spring. We all know that RoR and Django are really rapid-development frameworks, so is C# closer to Java-style "long-timed" development or to frameworks like those two, Django and RoR? I am, for now, an amateur Java programmer, and sometimes I get annoyed with the amount of code that has to be written to create even a short CRUD app. We need a lot of skills to create even a small app. I want some change, at least for some time, and to learn something new. I tried (just a few hours each) RoR first, then Django, and now I am writing in C#. It seems to be like Java but a little bit extended. With respect to future work as a professional coder: is it worthwhile to know both competing technologies, Java (and its frameworks) and C# with .NET (ASP.NET for example)? Maybe the better choice is Python? Or should I stop being stupid and keep working with Java but with another framework (and master my Java skills), or with JavaScript and jQuery to be better at web development? Actually this question depends on your own opinions, which is why I know it could be blocked by admins. But the main question is at the top of the post, I mean: is C# web development rapid, or closer to Java? I am afraid that if I don't try, I will regret it in the future, when I wake up and think: oh my god, how could I not get familiar with (another_technology_or_language). Thanks for your attention :) PS: I had asked the same question on Stack Overflow, but it was put on hold for being opinion-based. Hope it fits here ;)

    Read the article

  • Unity gizmos vs. referenced game objects

    - by DuckMaestro
    I'm designing a Unity script that I intend to be highly reusable and as easy as possible to setup within the editor. To this end, a number of properties of this script really need some kind of visual representation on screen. It is an unresolved question to me whether the design of the script should require references to placeholder game objects, OR just Vector3's and float's that have associated gizmos drawn for them. Normally a gizmo would be a natural choice, except that Unity gizmos are not directly manipulable (as far as I can tell). Because of this shortcoming I'm having to consider whether depending on references to placeholder game objects is a more designer-friendly approach ultimately, in spite of the extra setup required, and that it might be counter-intuitive when the placeholder game objects disappear at run-time (which my script would do). Is there a community standard or preference here in this case? Can a Unity-experienced game programmer / designer speak to which approach they feel is more intuitive or more convenient to setup, when using a 3rd party script? Or is this just splitting hairs as long as I ship an example prefab with my script?

    Read the article

  • Is there a product planning tool that has these specific features? [closed]

    - by acjohnson55
    I am working on a web startup in the early stages, and we are struggling a bit to manage the scope and scheduling of our product. We have loads of high-level features in the pipeline, but we need a good way of scheduling them for release iterations and breaking them into actual tasks that can be scheduled (that could be a separate tool, but integration would be preferred). I would say that our product can be pretty cleanly divided into "aspects", and we want to be able to separate features by the aspect to which they apply. Perhaps most importantly, it should be really simple to create and move features between target release points. We don't have physical space for a war room type setup, so whatever we settle upon should ideally have a cloud-type web interface. Right now, we're using Excel to make a grid of product aspects vs. target releases, and we store features at the intersections. But this is not providing a good way of indexing tasks to those features or being able to move them around. I would much rather have something that automates the grid overview. I'm less interested in something that helps with low-level scheduling than I am in something that is good at organizing the product plan at the long-term, high-level view. Is there a product planning tool out there that matches these specifications?

    Read the article

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-shaped (V-model) development process: requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), the test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't that too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing the testing (no offence meant, but we all know it's quite hard to break in a few clicks what you have built over several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just up front, this is not a duplicate of "Should programmers help testers in designing tests?", which talks about test preparation rather than test execution, where we avoid involving developers.)

    Read the article

  • iTunes wants to remove my existing apps on iPad

    - by Michael
    Here is the situation. I connected my iPad to a new PC and synced; now ticking Sync Apps under Devices > Apps gives me a warning that all existing apps on my iPad will be replaced with those from Library > Apps. But in the Library I have just a couple of old apps that I used a long time ago. So how can I sync the library in iTunes with the apps that already exist on my iPad? EDIT: I have tried clicking Transfer Purchases, but not all of the items went into the library, just a few of them.

    Read the article

  • Should I use subdomains or subfolders for my user groups?

    - by bilygates
    Hello, I run a photography website where each user has their own subdomain (i.e. user.site.com). I'm thinking of adding user groups, but I'm unable to decide whether I should associate a separate subdomain or simply a subfolder with each group:

        Subfolders (www.site.com/groups/my-group)
        - Pros: Easier to maintain from a technical point of view.
        - Cons: Harder to memorize. The URLs can get really long (www.site.com/groups/my-group/albums/my-album/).

        Subdomains (my-group.site.com)
        - Pros: Easier to memorize. Shorter URLs. One might have the impression that such a URL is somewhat more "independent" from the main site.
        - Cons: Group and user names share the same namespace, so we need to check for collisions when creating a new user/group. One cannot tell from the URL alone what kind of page it is: is x.site.com a user page or a group page?

    What's your opinion on the matter? I should note that DeviantArt.com uses the second option (that's where I got the idea). Thank you in advance!

    Read the article

  • Is it possible to extend a 504 timeout in nginx on a per-location basis

    - by codecowboy
    Is it possible to set timeout directives within a location block to prevent nginx returning a 504 from a long-running PHP script (PHP-FPM)?

        location /myurlsegment/ {
            client_body_timeout 1000000;
            send_timeout 1000000;
            fastcgi_read_timeout 1000000;
        }

    This has no effect when making a request to example.com/myurlsegment. The timeout occurs after approximately 60 seconds. PHP is configured to allow the script to run until completion (set_time_limit(0)). I don't want to set a global timeout for all scripts.

    Read the article
