Search Results

Search found 74197 results on 2968 pages for 'part time'.


  • Setting up a DNS server for a 3rd level domain

    - by user45339
    If I would like a client to use his own DNS servers for a third-level domain name (i.e. test.domain.com), how would I do that? I have name servers ns1.domain.com and ns2.domain.com for domain.com, but now I want ns1.rabbit.com and ns2.rabbit.com to be authoritative for test.domain.com. How can this be done? I know it's possible because I've seen it at some providers. Then, as a second (and related) part: how do I set up WHOIS for test.domain.com, so that if you query my server for information about test.domain.com, it returns different information than a query for domain.com does? Thanks
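
    Delegating a subdomain to someone else's name servers is done with NS records in the parent zone. A minimal sketch of what the domain.com zone might contain, assuming BIND-style zone-file syntax and using the names from the question:

        ; in the domain.com zone: delegate test.domain.com to the client's servers
        test.domain.com.    IN    NS    ns1.rabbit.com.
        test.domain.com.    IN    NS    ns2.rabbit.com.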

    Read the article

  • Cross-Cultural Design (great video from HFI) - #usableapps #UX #L10n

    - by ultan o'broin
    Great video from HFI Animate, featuring user-centered design for emerging markets, called Cross Cultural Design: Getting It Right the First Time. Apala Lahiri Chavan talks about the issues involved in designing solutions for Africa, India, China and more markets! Design for the local customer's ecosystem - and their feelings! A timely reminder of the importance of global and local research in UX!

    Read the article

  • On ESXi, guest machines hang for significant intervals compared to real machines. How can I fix this?

    - by Tarbox
    This is ESXi version 5.0.0. We plan on upgrading to 5.5 eventually.

    I have four code profiles: two taken on a real, unvirtualized machine, two taken on a virtual machine. Ordering the list of subroutines by time spent in each one, the two real profiles are practically identical. The two virtual profiles are different from each other and from the real profiles: a subset of subroutines are taking a lot more time on the virtual machines, and the subset is different for each run. The two virtual profiles take a similar total amount of time, which is 3 times what the real profiles take. This gross "how long does it take?" result is consistent after hundreds of tests across three different virtual machines on two different host machines -- the virtual machine is just slower. I've only run the code profiling on those four, however. Here is the most guilty set of lines:

        This is the real machine:
        8µs     $text = '' unless defined $text;
        1.48ms  foreach ( split( "\n", $text ) ) {

        This is the first run on the virtual machine:
        20.1ms  $text = '' unless defined $text;
        1.49ms  foreach ( split( "\n", $text ) ) {

        This is the second run on the virtual machine:
        6µs     $text = '' unless defined $text;
        21.9ms  foreach ( split( "\n", $text ) ) {

    My WAG is that the VM is swapping out the thread and then swapping it back in, destroying some level of cache in the process, but these code profiles were taken when the VM in question was the only active VM on the host, so... what? What does that mean? The guest itself is under light load; this is a latency problem for my users rather than a throughput problem. The host is also under light load; if I knew what resources to assign where, I could do it without worrying about the cost. I've attempted to lock memory, reserve CPU, assign a restrictive affinity, and disable hyperthread sharing. None of it helps: it still takes the VM 2-4x the time the real machine needs to do the same thing.

    The host the tests were run on is 6x2.50GHz, Intel Xeon E5-2640 w/ 16 GB of RAM; the guest exhibits the same performance under a wide combination of settings. The real machine is 4x2.13GHz, Xeon E5506 w/ 2 GB of RAM. Thank you for all advice.
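
    One way to test the scheduling/swap theory (a suggestion on my part, not something from the post) is to sample the host's scheduler counters with esxtop in batch mode while the profile runs:

        # on the ESXi host shell: 30 samples at 2-second intervals, saved for review
        esxtop -b -d 2 -n 30 > /tmp/esxtop.csv

    High %RDY (time a vCPU was runnable but not scheduled) or %CSTP (co-scheduling stalls) for the VM's world would point at host-side CPU contention rather than anything in the guest code.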

    Read the article

  • Search Engine Optimization - The Five Factors Search Engines Use to Rank Websites

    At the end of the day, SEO requires a lot of extremely specialized knowledge, time, and attention - on an ongoing basis. But because an SEO effort can give a website's rankings a dramatic boost in the search results - and there is a significant connection between search engine ranking and search referral traffic - it is well worth doing whatever it takes to develop the knowledge and to invest the time and attention.

    Read the article

  • Reflecting on week long Training with SQLSkills

    - by NeilHambly
    Time for a quick reflection on my five days of training with SQLSkills. They have four weeks in their immersion training program; this was week 1, Internals & Performance, held at a large Heathrow hotel: http://www.sqlskills.com/T_ImmersionInternalsDesign.asp So was the course worth the time and money? Undoubtedly. I believe a large number of the people there were also self-funding, along with the lucky corporate-sponsored ones. It was akin to doing, say, the "London marathon" in that you know...(read more)

    Read the article

  • Running Tomcat 7 and Apache 2 on the same server

    - by Thorn
    Part of my site needs to run over HTTPS, and I'm creating a sub-domain for that part. I have Apache httpd 2 AND Tomcat 7 running on the same server with the same IP; Apache is on port 80, of course, while Tomcat is running on port 8080. Right now I am doing domain forwarding for requests that need to run off Tomcat: for example, mathteamhosting.com/mathApp can forward to mathteamhosting.com:8080/mathApp. I would like to have Tomcat handle the HTTPS requests for that subdomain, and I don't think this forwarding technique can work in this case. How do I set things up so that Tomcat receives the requests on port 443 while Apache handles port 80? To be more specific:

        http://proctinator.com          == request goes to Apache
        https://private.proctinator.com == request goes to Tomcat
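
    For what it's worth, a common way to get this effect (an alternative to binding Tomcat to 443 itself, and an assumption on my part rather than anything from the question) is to let Apache terminate SSL on 443 and proxy the subdomain's traffic to Tomcat on 8080. A minimal sketch, assuming mod_ssl and mod_proxy_http are enabled and with certificate paths as placeholders:

        <VirtualHost *:443>
            ServerName private.proctinator.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/private.proctinator.com.crt   # placeholder
            SSLCertificateKeyFile /etc/ssl/private/private.proctinator.com.key # placeholder
            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

    If Tomcat really must terminate SSL itself, the equivalent is an HTTPS connector on port 443 in Tomcat's server.xml; the proxy route just avoids two processes fighting over privileged ports.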

    Read the article

  • As an Indie iOS Developer, is it more profitable to market apps heavily or focus on publishing more apps? [closed]

    - by user69860
    At first I thought that if I made a bunch of $0.99 apps that were all pretty nice, they would eventually start to make me some decent passive income. However, after publishing five $0.99 applications in the Apple App Store, I'm finding that I make around $5/day, which is basically nothing. Should I invest time into creating an even better app, hire a designer, spend money on PR and marketing, and then keep spending time updating/managing that app? Or should I continue to produce more applications solo?

    Read the article

  • Working with Windows Forms CheckBox Control using C#

    A CheckBox control allows users to select one or more options from a list. In this article, I will discuss how to create a CheckBox control in Windows Forms at design-time as well as run-time. After that, I will continue discussing various properties and methods available for the CheckBox control.

    Read the article

  • ExplorerCanvas and JQuery

    - by PhubarBaz
    I am working on a JavaScript app (CloudGraph) that uses the HTML5 canvas and jQuery, with ExplorerCanvas to support the canvas in IE. I recently came across an interesting problem. What I was trying to do is restore the user's settings when the page is loaded, reading some information from a cookie that I set the last time the user accessed the application. One of these settings is the size of the canvas. I decided that the best place to do this would be when the document is ready, using jQuery's $(document).ready().

    This worked fine in browsers that natively support the canvas element, but in IE I kept getting errors the first time I hit the page. It seemed that the excanvas element wasn't initialized yet, because I was getting null-reference and unknown-property errors. If I refreshed the page the errors went away, but the resized canvas wasn't drawing on the entire area of the canvas - it was as if the clipping rectangle was still set to the default canvas size. I found that when using excanvas, the canvas element has a div child element which is where the actual drawing takes place; changing the width and height of the canvas element in document.ready didn't change the width and height of the child div. Initially my solution was to also change the div element when changing the canvas element, and that worked. But then I realized that having to refresh the page every time I started the app in IE really sucked. That wouldn't be acceptable for users.

    Since it seemed like the canvas wasn't getting initialized before I was trying to use it, I decided to try initializing my app at a different time, and the next best place was the onload event. Sure enough, moving my initialization to onload fixed all of the problems. So, it looks like the canvas shouldn't be manipulated until the onload event when using ExplorerCanvas. There might be ways to do it when the document is ready - I found some posts on initializing excanvas manually - but for me, waiting until onload worked just fine.
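
    A minimal sketch of the fix described above (initCloudGraph is a hypothetical entry point, not a name from the post):

        // too early in IE: excanvas may not have wrapped the <canvas> yet
        // $(document).ready(function () { initCloudGraph(); });

        // wait for the load event instead, by which point excanvas is initialized
        window.onload = function () {
            initCloudGraph();   // read the cookie, resize the canvas, draw
        };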

    Read the article

  • Master SEO Techniques in 7 Easy Steps

    Learning SEO may take a lot of time and effort, but it can definitely be done. If you'd like to learn how to optimise your website for the search engines, you'll have to invest a chunk of your time and energy to master SEO techniques.

    Read the article

  • High traffic chat - how to check if there is new message and show it for all users

    - by user2633999
    I already asked a question about this, but obviously it was not received very well - apparently too long, when it's actually more information so you could have given me a better answer. OK, I will be much clearer now.

    What is the best possible logic for developing a scalable chat, in terms of stability, storing/reading messages, updating the chat with new messages for all users, etc.? I have most of this developed; the logic I think I'm missing is: check if there is a new message and show it to all users. I have this implemented, but it crashes the site under its traffic of 300k-400k people, so that's my main question.

    The chat is PHP-based and uses Pusher (www.pusher.com) for instant messaging, but that lacks what I need because it's more like a websocket. I'm using hardcoded files to keep messages (I want to avoid a database as much as possible) - extensionless files, I'm sure you know the kind. I'm getting the crash with:

        $fp = fopen(..., "w"); // pretend ... is the path and filename
        fwrite($fp, $msg);     // hardcode the message
        fclose($fp);

    where $msg is the message itself. I'm having one file per message, and I show the last 150 messages = 150 file accesses and reads; yeah, it's too much, I guess. I have better logic now which I'm pursuing: one file holding the last 50-100 messages at all times. That should be much better.

    As for how it crashes - that's the trickiest part, because everything seems ordinary; believe me, it is difficult to determine what exactly crashes the site. But within about 5 minutes, when I try to open the site, it's gone; then I put up the old content without the chat and it's back online again. I have jQuery posting every 1 second to check if there is a new message. I'm using a timestamp in a special file where I keep the time the last message was sent, and if ((time() - time in file) <= 2), I reload the last 150 messages, including the last one. Too much input/output, write/read, or however you want to say it, is what I think crashes the site.
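
    A sketch of a lighter-weight version of that 1-second poll (my own suggestion, assuming the single-log-file design described above; the path is a placeholder): filemtime() answers "did anything change?" with one stat() call, so the common no-news case never opens the file at all.

        <?php
        // poll endpoint, hit by jQuery once per second
        $log  = '/path/to/chat.log';          // placeholder: one append-only file per room
        $last = @filemtime($log);             // cheap stat() instead of open + read
        if ($last !== false && time() - $last <= 2) {
            // something new: one read serves the whole response
            $lines = file($log);
            echo implode('', array_slice($lines, -150));
        }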

    Read the article

  • Can Anything be Done to Make Improv (a 1993 Win 3.1 App) handle larger Files?

    - by user75185
    My very favorite spreadsheet is Improv, a 1993 Windows 3.1 application. It still puts Excel to shame for building spreadsheets and writing formulas. The only problem is that because Improv was written when 1 MB of RAM was state of the art, it becomes unstable when working with larger spreadsheets and often crashes and/or corrupts the data file.

    I am working on a project that greatly exceeds Improv's limits. Although it will ultimately require more robust database capability, I could save a lot of critical time if I could delay that headache and continue working in Improv for now. To that end, I moved to the only product I could find that comes close, Quantrix, which is essentially Improv updated to handle large spreadsheets and utilize today's technologies. The problems with Quantrix are its speed (significantly slower than Improv) and its $1000 price (which I cannot afford). I have already had three 15-day extensions after the initial 30-day trial, so my time to use Quantrix as a bridge is at its end. Searches for Improv over the years have gotten me nowhere and, not surprisingly after reading some posts on this site, I got nothing for the money and time invested in finding a programmer to write code to "fix" this problem. Improv is freely available as "abandonware" at http://vetusware.com/download/LotusImprov2.1/?id=5797 , and the best background info can be found on Wikipedia and at "Moose's Greatest Software Products of All Time - Lotus Improv": http://moosevalley.fhost.com.au/mooses_review_page_lotus_improv.html

    It is critically urgent for me to focus on analyzing the data asap, and working in a stable Improv would, without question, be the fastest route. To that end, I am looking for answers to the following questions and anything else that might be helpful:

        1) Is it lawful to hire someone to fix Improv for my own use? If so,
        2) About how much should it cost?
        3) About how long should it take?
        4) What skills should I be looking for, and/or how should a post be worded?
        5) Is there a niche site where it should be posted?
        6) What questions can I ask to quickly screen candidates?

    Since I am not a programmer, I need questions whose answers leave no room to confuse me, whether intentionally or not. For example, what tools or players should someone with an acceptable competency level have knowledge of?

    Read the article

  • Offsite Backup

    - by Grant Fritchey
    There was a recent weather event in the United States that seriously impacted our power grid and our physical well-being. Lots of businesses found that they couldn't get to their building, or that their building was gone. Many of them got to do a full test of their disaster recovery processes. A big part of DR is having the ability to get yourself back online in a different location. Now, most of us are not going to be paying for multiple sites, but we need the ability to move to one if needed. The best thing you can do to start setting this up is to have an off-site backup. Want an easy way to automate that? I mean, yeah, you can go to tape or to a portable drive (much more likely these days) and then carry that home, but we've all got access to off-site storage these days: SkyDrive, DropBox, S3, etc. How about just backing up to there? I agree - great idea. That's why Red Gate is setting up some methods around it. Want to take part in the early access program? Go here and try it out.

    Read the article

  • How to Recycle Your Website Content (And Why)

    If you use article marketing to promote your internet business (and I certainly hope you do), why would you want to spend the time and effort writing an article and then only use it once, when you can use it multiple times? We recycle lots of things these days because it's environmentally friendly. I recycle my articles because it's time- and resource-friendly!

    Read the article

  • Most efficient way to store this collection of moduli and remainders?

    - by Bryan
    I have a huge collection of different moduli and, associated with each modulus, a fairly large list of remainders. I want to store these values so that I can efficiently determine whether an integer is equivalent to any one of the remainders with respect to any of the moduli (it doesn't matter which; I just want a true/false return). I thought about storing these values as a linked list of balanced binary trees, but I was wondering if there is a better way?

    EDIT: Perhaps a little more detail would be helpful. As for the size of this structure, it will be holding tens of thousands of (prime-1) moduli, and associated with each modulus will be a variable number of remainders. Most moduli will have only one or two remainders associated with them, but a very rare few will have a couple hundred. This is part of a larger program which handles numbers with a couple thousand (decimal) digits; that program will benefit from this table being as large as possible while still being quick to search. Here's a small part of the dataset, where the moduli are in parentheses and the remainders are comma-separated:

        (46)  k = 20
        (58)  k = 15, 44
        (70)  k = 57
        (102) k = 36, 87
        (106) k = 66
        (156) k = 20, 59, 98, 137
        (190) k = 11, 30, 68, 87, 125, 144, 182
        (430) k = 234
        (520) k = 152, 282
        (576) k = 2, 11, 20, 29, 38, 47, 56, 65, 74, ...(add 9 each time), 569

    I had said that the moduli were prime, but I was wrong: they are each one below a prime.
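
    A hash-based layout is probably the simplest strong candidate here (a sketch of my own, not from the question; plain Python with invented names): map each modulus to a set of its remainders, so a query costs one pass over the moduli with an O(1) membership test at each step.

        # residue table built from the dataset above (truncated for illustration)
        residues = {
            46: {20},
            58: {15, 44},
            70: {57},
            102: {36, 87},
            190: {11, 30, 68, 87, 125, 144, 182},
        }

        def matches_any(n, residues):
            """True if n is congruent to a stored remainder modulo any stored modulus."""
            return any(n % m in rems for m, rems in residues.items())

        print(matches_any(66, residues))   # True: 66 mod 46 = 20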

    Read the article

  • Red Hat Kickstart: How do I Prevent partitioning?

    - by frio
    Hey all, I'm currently working on a new virtualisation setup using Xen and CentOS for my workplace. We intend to deploy the domUs into LVM volumes. Currently, the only thing preventing this from working as smoothly as we'd like is the Kickstart script's insistence on partitioning. This is the relevant part from our current KS template (which I've been messing with):

        # Partitioning
        clearpart --all --initlabel --drives=xvda
        part / --size=0 --grow --ondisk=xvda --fstype=ext3

    This sets up a single partition and installs to it - which would be fine, but I'd prefer if there were no partitions and it installed directly to the existing LVM (so that we could then mount the LVM from the dom0 for backup and maintenance purposes). It's possible I'm doing something wrong and should be exporting the volume as xvda1 rather than xvda - which I'm more than happy to amend - but I'm still not sure how I'd navigate the Kickstart! I'd really appreciate any help :). Cheers in advance!
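
    One direction that might be worth testing (an assumption on my part, not a verified fix): if the volume is exported as xvda1, Kickstart's part directive can reuse an existing partition instead of creating one, and clearpart can be told to touch nothing:

        # Partitioning - reuse the existing device, create nothing
        clearpart --none
        part / --fstype=ext3 --onpart=xvda1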

    Read the article

  • Win Server 2008 R2 - Mapped shared folder hanging?

    - by M-Tech
    I have recently built a Windows Server 2008 R2 machine. This is purely for file-server purposes and is very much a basic build: all Windows updates installed and joined to the domain. I have set up a shared folder on the C: drive and added permissions for domain users as co-owners. The client machines run XP SP3 and are part of the domain also. We have a few servers running the same setup on a few of our sites, but this one in particular crashes users' machines (explorer.exe hangs for at least a few minutes) when they attempt to access the shared folder. I have turned off the power-save option on the network card as well; still no change. Any help with this is very much appreciated and I look forward to hearing from you ;)

    Read the article

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: as part of a bigger product, my team is asked to build a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put a certain number of user stories on the wall regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5.

    Coming to the problem: the team then moved to the user stories regarding the other libraries - 10 stories - and added those 2 points for the "function call mechanism" to each of them. This immediately raised the total for the product by 20 points! Everyone in the team knows that any user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that mechanism in one user story, but those 20 points feel so awfully unrealistic!

    I've proposed a solution, but I'm absolutely not satisfied with it: we created a "design story" and put those annoying 2 points on it. However, when we came to realize it and demonstrate it to our customers, we were unable to show them anything really valuable about that story! The question here is whether we should ignore the principle of having isolated user stories (without any dependency between them). What would you do - or even better, what have you done - in situations like this? (A small footnote: following a suggestion, I've moved this question from Stack Overflow.)

    Read the article

  • rsync assigns deny permission

    - by user773478
    Currently a script is used to copy files with rsync (version 2.6.9, protocol version 29) from Linux/Unix servers to a W2K3 server, using a very basic command such as:

        rsync -v source_server::share_name/file_name /cygdrive///file_name

    The script then makes a copy of this downloaded file for other purposes. This is part of a larger middleware system that is being moved to new hardware on W2K8R2. The second part - making the copy of the file - does not work with the more recent rsync client, version 3.0.7, protocol version 30 (it shows up as cwRsync in Add/Remove Programs). The reason is that rsync assigns special permissions to the file, including deny entries. The user (a service account) which downloads the file is in the local admin group. The file can be copied elsewhere using rsync, and it can be deleted, but it cannot be opened or copied locally by the same user, as the deny permission supersedes.
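
    A workaround often suggested for cwRsync's ACL handling (an assumption here, not verified against this setup) is to have rsync set explicit permission bits on what it writes, rather than letting the POSIX-to-Windows ACL mapping produce deny entries:

        rsync -v --chmod=ugo=rwX source_server::share_name/file_name /cygdrive///file_name

    The destination path is left elided as in the original command; --chmod applies the given mode to every transferred file.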

    Read the article

  • Grub2 cannot detect Windows 8

    - by MetaChrome
    At installation it did not detect Windows. I mounted the Windows partition and have run os-prober, with no results returned. I am able to boot Windows or Ubuntu by specifying the partition in the UEFI boot order. The BIOS does not appear to allow booting in legacy mode, and SecureBoot is on. This is with Ubuntu 12.04 LTS on an Inspiron 15. Here is the gdisk output:

        1   2048        1026047     500.0 MiB  EF00  EFI system partition
        2   1026048     1107967     40.0 MiB   FFFF  Basic data partition
        3   1107968     1370111     128.0 MiB  0C01  Microsoft reserved part
        4   1370112     2394111     500.0 MiB  2700  Basic data partition
        5   2394112     544743423   258.6 GiB  0700  Basic data partition
        6   606183424   625140399   9.0 GiB    2700  Microsoft recovery part
        7   544743424   545230847   238.0 MiB  0700  (/boot)
        8   545230848   556949503   5.6 GiB    8200  (swap)
        9   556949504   606181375   23.5 GiB   0700  (/)

    When installing Ubuntu, I believe I specified that the bootloader be installed on /dev/sda. I added the following to /etc/grub.d/40_custom, but booting Ubuntu did not offer a grub menu:

        menuentry "Windows 8" {
            set root = "(hd0,4)"
            chainloader +1
        }

    When booting, I think I see "EFI Disk error" flash very quickly before Ubuntu starts booting.
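
    For reference (my own sketch, not from the question): on a UEFI system Windows 8 boots through its EFI loader rather than a partition boot sector, so chainloader +1 generally won't work there. A custom entry usually looks more like the following, with the EFI system partition's UUID as a placeholder:

        menuentry "Windows 8 (UEFI)" {
            insmod part_gpt
            insmod chain
            search --fs-uuid --set=root XXXX-XXXX   # UUID of the EFI system partition
            chainloader /EFI/Microsoft/Boot/bootmgfw.efi
        }

    Note also that grub's set takes no spaces around the equals sign (set root=...), which alone would keep the entry quoted in the question from working as written.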

    Read the article

  • How can I use cron-apt to download and install updates between midnight and 5am?

    - by rudivonstaden
    I have capped data which is essentially free between midnight and 5am. As a result, I would like to set Ubuntu to automatically download updates during that window. It seems that cron-apt is what I need, but the documentation and syntax are sketchy and unintuitive. Can anyone tell me how to use it to schedule downloads? It can install the updates at the same time as far as I'm concerned, but that's not such a big issue - I can run those at a later stage.
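
    As a starting point, a sketch based on cron-apt's stock layout (times picked for the free window; the file contents are assumptions, so check them against the installed defaults): the schedule lives in /etc/cron.d/cron-apt and the actions in /etc/cron-apt/action.d/, where the shipped action downloads upgrades without installing them.

        # /etc/cron.d/cron-apt - run at 00:30, inside the midnight-to-5am window
        30 0 * * * root test -x /usr/sbin/cron-apt && /usr/sbin/cron-apt

        # /etc/cron-apt/action.d/3-download - download only (-d), don't install
        dist-upgrade -d -y -o APT::Get::Show-Upgraded=true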

    Read the article

  • Total Solar Eclipse 13/November/2012 - update

    - by TATWORTH
    Panasonic Eclipse Live will start their broadcast at 18:30 UT tonight at https://www.facebook.com/PanasonicEclipseLive/app_435671416492320 - alternative URLs are http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-1 and http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-2/collection/744e86aa753e (The start time is 04:30 local Australian time, which will be on 14 November for them.)

    Read the article

  • How to read iptables -L output?

    - by skrebbel
    I'm rather new to iptables, and I'm trying to understand its output. I tried to RTFM, but to no avail when it comes to little details like these. When iptables -vnL gives me a line such as:

        Chain INPUT (policy DROP 2199 packets, 304K bytes)

    I understand the first part: for incoming data, if the list below this line does not provide any exceptions, then the default policy is to DROP incoming packets. But what does the "2199 packets, 304K bytes" part mean? Is that all the packets that were dropped? Is there any way to find out which packets those were, and where they came from? Thanks!
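
    For what it's worth, those counters are the packets and bytes that fell through every rule and were handled by the chain's policy - so with a DROP policy, yes, they are the dropped packets. A standard way to see which packets those are (a sketch, not from the question) is a rate-limited LOG rule at the very end of the chain, just before the policy applies:

        # log (rate-limited) whatever is about to hit the DROP policy
        iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "INPUT drop: "

        # then watch the kernel log, e.g.
        tail -f /var/log/syslog | grep "INPUT drop: "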

    Read the article

  • Stairway to SQL Server Agent: Step 1: Setup and Overview

    SQL Server Agent is a Microsoft Windows service that allows a DBA to automate administrative tasks. SQL Server Agent can run jobs, monitor SQL Server, and process alerts. The SQL Server Agent service must be running before any jobs scheduled to execute automatically can run.

    Read the article

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began ab (Apache Bench) testing on the Amazon EC2 instance where the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run:

        ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php

    Here are the results:

        Concurrency Level:      20
        Time taken for tests:   48.197 seconds
        Complete requests:      200
        Failed requests:        0
        Write errors:           0
        Total transferred:      392111200 bytes
        HTML transferred:       392047600 bytes
        Requests per second:    4.15 [#/sec] (mean)
        Time per request:       4819.723 [ms] (mean)
        Time per request:       240.986 [ms] (mean, across all concurrent requests)
        Transfer rate:          7944.88 [Kbytes/sec] received

    While the ab test is running, I run vmstat, which shows that swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), and RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and the site slows to a crawl.

    Here's what I think I'm doing right on the code side: in Chrome, uncached pages typically load in 800-1000ms and cached pages in 300-500ms - not stunning, but not terrible either. Thanks to view caching, there might be at most one DB query per page load, to write session data, so we can rule out a DB bottleneck. I have APC on. I am using memcached to serve the view cache and other site caches. The xhprof code profiler shows that cached pages take 10MB-40MB in memory and 100ms-1000ms in wall time. The worst-offending pages look something like this in xhprof:

        Total Incl. Wall Time (microsec):  330,143 microsecs
        Total Incl. CPU (microsecs):       320,019 microsecs
        Total Incl. MemUse (bytes):        36,786,192 bytes
        Total Incl. PeakMemUse (bytes):    46,667,008 bytes
        Number of Function Calls:          5,195

    My Apache config:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 3
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          120
            MaxRequestsPerChild 1000
        </IfModule>

    Is there something wrong with the server? Some gotcha with EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000 concurrency capacity, but at this rate, it ain't gonna happen.
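
    Two bits of arithmetic from the numbers above (my own back-of-envelope check, not from the post) that are worth staring at:

        392,111,200 bytes / 200 requests ≈ 2 MB of HTML per page view
        120 MaxClients × ~40 MB peak per request ≈ 4.8 GB potential demand vs. 2 GB of RAM

    Pages weighing ~2 MB each keep every prefork worker busy shipping bytes, and a MaxClients that can theoretically commit more than twice the instance's RAM leaves no headroom, so trimming page weight and sizing MaxClients to roughly (available RAM / peak per-process use) would be the first things to try.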

    Read the article
