Search Results

Search found 4468 results on 179 pages for 'zone transfer'.

Page 68/179 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider in one of the replies I was told that VPS hosting is not recommended for 30 sites, even if they are small. The resource requirements might be too great even for VPS. So I should turn to dedicated hosting. The lowest cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx). But this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores so I assume I'd really need the Quad Processor. However, this starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine. I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine. This amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma. First month's costs: $599. First month's revenue: 6 x $16.95 = $101.70. Loss in first month: $497.30. First year costs: $599 x 12 = $7,188. First year revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90. Loss in first year: $5,459.10. Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?
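
    A quick sketch of that arithmetic in Python, using only the figures quoted above (the break-even year is an illustration of the poster's own assumptions, not a hosting recommendation):

      # Rough break-even sketch using the figures quoted above (USD).
      DEDICATED_MONTHLY = 599.0      # quad-processor dedicated server
      SHARED_MONTHLY = 16.95         # per-site shared hosting fee
      SITES_NOW = 6                  # sites moved to the box immediately
      NEW_SITES_PER_YEAR = 5         # future sites hosted there

      def yearly_result(year):
          """Revenue minus cost for a given year, assuming new sites are
          added evenly through the year (so they earn ~6 months on average)."""
          existing = SITES_NOW + NEW_SITES_PER_YEAR * (year - 1)
          revenue = (existing * SHARED_MONTHLY * 12
                     + NEW_SITES_PER_YEAR * SHARED_MONTHLY * 6)
          return revenue - DEDICATED_MONTHLY * 12

      for y in range(1, 8):
          print(f"Year {y}: net {yearly_result(y):9.2f}")
      # Year 1 reproduces the ~$5,459 loss above; the box only creeps past
      # break-even around year 7, once 30+ sites are paying for it.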

    Read the article

  • Copy/paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got this folder which contains 15,000 tiny images (around 400 bytes each). If I copy/paste this folder on my laptop (Windows 7, i7 latest gen, superfast SSD) it takes about 30 seconds (yes, for 7 megs!!!); the average transfer rate is 400 KBytes/second, which is so slow. I mean my usual transfer rate is more like hundreds of MBytes per second!!! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian based, ext3 FS), which runs on the same SAN as all the Windows servers I've tested, it's nearly instantaneous!!! I'm pretty sure the size/number of the files may stress one filesystem more than another, but such differences!? Why is that? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (one sec or so)? (I mean this was not a hardlink that I created, it was a true copy.) Is this normal behaviour or something unusual?
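
    The usual culprit here is per-file overhead (open/close, metadata updates, antivirus hooks) rather than raw bandwidth. One hedged workaround, assuming Python 3 is available on the Windows box and using placeholder paths, is to pack the folder into a single archive first, copy that one file, and unpack it on the other side:

      # Pack the ~15,000 tiny files into one archive, copy that single file,
      # then unpack on the destination. One large stream avoids paying the
      # per-file open/close/metadata cost on the target volume.
      import tarfile

      SRC = r"C:\data\tiny_images"          # placeholder source folder
      ARCHIVE = r"C:\data\tiny_images.tar"  # single file to copy instead

      with tarfile.open(ARCHIVE, "w") as tar:   # plain tar, no compression
          tar.add(SRC, arcname="tiny_images")

      # On the destination, after copying ARCHIVE across:
      # with tarfile.open(ARCHIVE) as tar:
      #     tar.extractall(r"D:\restore")       # placeholder target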

    Read the article

  • How to download large files when the download size is restricted?

    - by Rahul
    In my office, the network admin has restricted the download limit to a size of 1.8MB for any file. This is for subordinates' accounts only. But for my manager's PC, there are no restrictions. Is there any way to download files from my PC by using my manager's IP address? I just tried using his IP on my PC but had the same problem. Earlier I was given access to our Linux server from my PC using PuTTY. Then I used to download large files onto the server and then transfer them from the server to my machine using FireFTP. This transfer worked perfectly fine. But now I don't have any access to the server. So would I be able to download large files using FireFTP from my own PC? I'm using a Windows XP machine. Please suggest any possible solution. Thanks.

    Read the article

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize a SBS 2003 server in my work environment. I need some tips on what people think is the best way to proceed. Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role. AD/DNS is replicating here correctly. This was mainly done to provide fault tolerance to the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost. Options: I see (at least) two routes I could take to complete this. From what I've read option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.
    Option 1) P2V the primary DC:
    - Power off primary DC
    - Power off secondary DC (to prevent USN rollback in case P2V has issues)
    - P2V (cold clone) primary DC
    - Boot new PDC VM
    - Allow new hardware to be detected
    - Remove old NIC hardware from Device Manager
    - Assign old IPs to new virtual NICs
    - Reboot PDC VM, confirm connectivity and no major issues
    - Power on secondary DC, confirm replication
    Option 2) Create new VM, transfer roles, remove original DC from domain:
    - Create new VM, install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions this.)
    - Add VM to domain, promote to DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
    - Set up RRAS on new VM
    - Set up IIS/FTP on new VM
    - Move file shares to new VM
    - Transfer FSMO roles to the new VM DC
    - dcpromo original primary DC out of the domain

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process. This is to avoid USN rollback and other replication issues. The following are the steps that I was planning to perform:
    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenter servers through IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure network interface/DNS/DHCP for both DCs in the new datacentre.
    The reason I tried to use Veeam FastSCP rather than VMware Standalone Converter is that it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup and Replication. Sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow, registering only 1KB/s as opposed to the normal 1MB/s (or more). When that attempted transfer failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I have tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences. The moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.

    Read the article

  • SQL Server 2005 to 2008 DB attach help please!

    - by Brandon
    I have SQL Server 2005 Standard on my personal machine. I created a very big DB, about 21 GB. I made a backup and transferred the .bak file via an FTP program to my dedicated server. I have SQL Server 2008 Enterprise Edition on my dedicated server. I tried to restore the transferred .bak file but got an error. I posted the error on here and was told the database is corrupt. How? I don't know. The connection was not interrupted during the FTP transfer. The DB works on my own machine. So then I detached the DB on my own machine and transferred the mdf and ldf files to my dedicated server through FTP again, and again there were no interruptions. Now I try to attach the DB and get this error: The header for file 'DB.mdf' is not a valid database file header. The FILE SIZE property is incorrect. (Microsoft SQL Server, Error: 5172) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.1442&EvtSrc=MSSQLServer&EvtID=5172&LinkId=20476 I already wasted 21 GB transferring the .bak file. Now I used another 21 GB to transfer the mdf and ldf files. Please tell me there's a solution. The DB can detach and attach fine on my machine in SQL Server 2005 but not in SQL Server 2008 on my server.

    Read the article

  • Possible disk IO issue

    - by Tim Meers
    I've been trying to really figure out what my IOPS are on my DB server array and see if it's just too much. The array is four 72.6 GB 15k RPM drives in RAID 5. To calculate IOPS for RAID 5 the following formula is used: (reads + (4 * writes)) / number of disks = total IOPS. The formula is from MSDN. I also want to calculate the avg queue length, but I'm not sure where they are getting the formula from; I think it reads on that page as avg queue length / number of disks = actual queue. To populate that formula I used perfmon to gather the needed information. I came up with this, under normal production load: (873.982 + (4 * 28.999)) / 4 = 247.495. Also, the disk queue length of 14.454 / 4 = 3.614. So to the question: am I wrong in thinking this array has very high disk IO? Edit: I got the chance to review it again this morning under normal/high load. This time with even bigger numbers and IOPS in excess of 600 for about 5 minutes, then it died down again. But I also took a look at the Avg sec/Transfer, %Disk Time, and %Idle Time. These numbers were taken when the reads/writes per sec were only 332.997/17.999 respectively. %Disk Time: 219.436, %Idle Time: 0.300, Avg Disk Queue Length: 2.194, Avg Disk sec/Transfer: 0.006, Pages/sec: 2927.802, % Processor Time: 21.877. Edit (again): Looks like I have that issue solved. Thanks for the help. Also, for a pretty slick parser I found this: http://pal.codeplex.com/ It works pretty well for breaking down the data into something usable.
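
    A small worked version of the MSDN formula using the counters from the question; the per-disk ceiling in the final comment is a common rule of thumb for 15k RPM drives, not a measurement of this array:

      # RAID 5 backend load: each logical write costs 4 backend I/Os.
      def raid5_backend_iops(reads_per_sec, writes_per_sec, disks):
          """Backend I/Os per second per disk: (reads + 4 * writes) / disks."""
          return (reads_per_sec + 4 * writes_per_sec) / disks

      DISKS = 4
      print(raid5_backend_iops(873.982, 28.999, DISKS))   # ~247.5 (normal load)
      print(raid5_backend_iops(332.997, 17.999, DISKS))   # ~101.2 (second sample)
      print(14.454 / DISKS)                               # queue per disk ~3.6
      # A 15k RPM spindle is typically good for roughly 175-210 random IOPS,
      # so ~247 per disk plus a sustained per-disk queue above 2 suggests the
      # array really is saturated at that load.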

    Read the article

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like tar c . | pv | nc blah blah blah And it works great, the network stays fairly saturated, life is good. Until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the dest machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4GB of physical RAM and run 64-bit Ubuntu 9.04 server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
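
    If the version of pv in use supports a rate limit (pv -L), that is the simplest place to cap the pipe. Failing that, here is a minimal throttling filter sketched in Python that can be dropped into the pipeline; the rate and block size are placeholder assumptions:

      #!/usr/bin/env python3
      # Usage sketch:  tar c . | python3 throttle.py 40 | nc <host> <port>
      # Caps throughput (MB/s from argv) so the sender can't run far ahead of
      # the slow writer on the destination end (the behaviour described above).
      import sys
      import time

      limit_mb = float(sys.argv[1]) if len(sys.argv) > 1 else 40.0
      limit = limit_mb * 1024 * 1024        # bytes per second
      block = 64 * 1024

      start = time.monotonic()
      sent = 0
      while True:
          chunk = sys.stdin.buffer.read(block)
          if not chunk:
              break
          sys.stdout.buffer.write(chunk)
          sent += len(chunk)
          ahead = sent / limit - (time.monotonic() - start)
          if ahead > 0:                     # we're ahead of the target rate
              time.sleep(ahead)
      sys.stdout.buffer.flush()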

    Read the article

  • How is network mounted software executed?

    - by CptSupermrkt
    I would like to understand how network mounted software works. For example, at my place of work, we have a software server. Each client machine (hundreds of them) automatically mounts directories from the software server on boot. For example, a program like Matlab is installed just once on the software server, but each client machine can start up an instance of Matlab. What is going on under the hood? Let's say I run /opt/bin/matlab and /opt/ is mounted from the software server: what happens when I press Enter to execute matlab on a client machine? The process is on the client machine, and I've already narrowed down that there isn't any implicit or hidden file transfer (i.e. copying Matlab to my machine temporarily for that session) by running matlab on a computer with nearly zero disk space (i.e. not enough room to transfer). Since Matlab was installed on the server, how is my client computer executing it? What mechanism is controlling this? What is happening behind the scenes?

    Read the article

  • Using the same Windows 8 Upgrade installer on multiple PCs

    - by Karan
    As per this article: You may transfer the software to another computer that belongs to you. … You may not transfer the software to share licenses between computers. But what if I have a bunch of PCs with a mix of XP/Vista/Windows 7? Can I purchase either the Windows 8 Pro Upgrade $40 (download only) or $70 (DVD) version (both of which come without a key) only once and use it to upgrade all the PCs? Since I'm not sharing the license and each PC has its own valid genuine license, it should be allowed, right, or is it illegal? Even if they want people to shell out $40/$70 for each PC, how would they enforce the use of the installer/media on only one PC each? EDIT: I have been given to believe by a source that the installer will only check for the previous OS' key, which is what is confusing me (I have never purchased an upgrade version before this, only full retail or pre-installed versions). Is this true or will I need to enter two keys to make the upgrade work, one for the previous version and then one for Windows 8? If the latter is the case, then the issue is solved since obviously the same Windows 8 key will not be valid for multiple PCs.

    Read the article

  • RAID 6 that can read at least 1000 Mbit/s?

    - by Diblo Dk
    I purchased a Dell PERC 6/i which I expected to be able to read at 1000 Mbps. There is not much to do about it now, but there are some things I wanted to know for another time. I have configured it with four 2 TByte drives in RAID 6. It has 256 MByte of RAM and a transfer rate of 300 Mbps. The benchmark test showed: Min read rate: 136.3 Mbps, Max read rate: 329.6 Mbps, Avg read rate: 242.2 Mbps. What could I have done to get at least 1000 Mbps? Is it normal for internal and external RAID controllers to have a lower transfer rate, e.g. 300 Mbps? (I did not notice at the time that it was not 3 Gbps.) How would a RAID 10 have performed compared to RAID 6 or 5? Would it have been better to use software RAID (Linux) with the internal 3 Gbps SATA controller? UPDATE: The drives are SATA III 6 Gbps. http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/desktop-hdd-data-sheet-ds1770-1-1212us.pdf (2TB)
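
    A back-of-the-envelope sanity check; the per-drive figure is an assumed ballpark for a 7200 RPM desktop drive, not a measurement of this array:

      # Unit check plus a rough sequential-read ceiling for the array.
      PER_DRIVE_MB_S = 150          # assumed sequential rate of one 2 TB drive
      DRIVES, PARITY = 4, 2         # RAID 6 spends two drives' worth on parity

      target_mb_s = 1000 / 8                              # 1000 Mbit/s = 125 MB/s
      raid6_ceiling = (DRIVES - PARITY) * PER_DRIVE_MB_S  # ~300 MB/s of data
      print(f"Target: {target_mb_s:.0f} MB/s")
      print(f"RAID 6 rough ceiling: {raid6_ceiling} MB/s "
            f"({raid6_ceiling * 8} Mbit/s)")
      # Large sequential reads should therefore clear 1000 Mbit/s easily, so a
      # benchmark topping out near 300 Mbit/s points more at a link, cache or
      # controller setting than at RAID 6 itself.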

    Read the article

  • Why is my rsync so slow compared to pure cp or even scp?

    - by nfm
    I'm transferring the files from Linux to Windows 7 via a mounted share (the share is mounted from Windows on Linux). I'm copying lots of data (i.e. nearly a TB) from the old to the new machine within my LAN. I'm unfortunate enough already that I only have 100MBit. Naturally I blindly used rsync but already wondered after a day why it felt so slow. Enabling the progress meter showed me a transfer rate of about 2MBit/s. So I took a reasonably big file (800MB) and tracked the transfer timing: cp: 05:33, scp (*): 06:33, rsync: 21:51. (*) scp via localhost to the same Linux machine directly onto the share; completely useless but provided a progress meter. The tests were as simple as (cp|scp|rsync) <source> <destination> with no special arguments except host/port for scp. I even tried the -W switch for rsync but cancelled after ten minutes. rsync is 3.0.3 running on Lenny. Being able to interrupt the copy process anytime and resume led me to rsync, but now I think I seriously need to reconsider this requirement. How is such a big difference possible?

    Read the article

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir and delete some of the source files in src:path/to/dir that match a pattern (or size limit) but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit. Update1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted but don't want to have anything deleted in dest:/path/to/other/dir. Update2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete after the transfer in src: when rsync starts. By the time rsync finishes there will be N+M such files there. If I now delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like to have a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, and compare that list of files with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
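
    rsync on its own can't restrict --remove-source-files to a pattern in a single pass. One hedged workaround is a small wrapper that deletes from the source only the files rsync itself reported as transferred and that match the pattern; the host, paths and pattern below are placeholders, and the sketch assumes a local source with a remote destination:

      # Copy first, then delete only sources that (a) rsync reported as
      # transferred in this run and (b) match the pattern. Files created
      # after rsync built its file list are never touched.
      import os
      import re
      import subprocess

      SRC = "/path/to/dir/"                  # local source (trailing slash)
      DEST = "dest:/path/to/other/dir/"      # remote destination
      PATTERN = re.compile(r"\.log$")        # delete only these from SRC

      result = subprocess.run(
          ["rsync", "-a", "--out-format=%n", SRC, DEST],
          check=True, capture_output=True, text=True)

      for name in result.stdout.splitlines():
          path = os.path.join(SRC, name)
          if PATTERN.search(name) and os.path.isfile(path):
              os.remove(path)                # only files just copied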

    Read the article

  • Is my "Generic" USB Flash Drive broken?

    - by Jesse J.
    So here is the situation. I find myself technologically knowledgeable about many things (I love to code, whether it's websites, C#, C++ or so on). However: my 2 toddlers (my wife actually) bought me a "Generic" 128 GB USB storage device (USB flash drive) for Father's Day. I thought awesome at first..... WRONG! Nothing but problems with it. 3-4MB/s MAX transfer speed. I can bear with it. BUT! When I went to reformat my computer I transferred my save files from my games over to the stick, and then the USB stick managed to become corrupted. Not even a simple format would work. It's screwed. I tried to run "chkdsk X: /X /F /R" with administrator rights (I manually changed the USB drive letter to X while troubleshooting). I did this after a long session to make it work with no errors (had to delete the log) and I finally recovered the files. However, when I go to use it (transfer to or from) it transfers a couple of KB to or from the stick and then freezes. It says (Windows 7): Name: From: Folder (X:\File\Location) To: Folder (C:\Users\Username\Desktop) Items Remaining: 0 (0 bytes) Speed: 0 bytes/second. It does this forever... and ever... and ever... It transferred 3 files at least, and then stopped. This is a new USB stick bought from a "high" reputation company on eBay. Is the USB stick screwed?

    Read the article

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these to one Drobo Thunderbolt drive array attached to a Mac Mini server (running 10.9 Server) using Time Machine Server. My question is, will this scale to 20 users? Examples I have seen seem to be 5 or 6 users tops, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees). My primary concern is saturating our gigabit network, as Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also have some people occasionally on our 802.11ac network and not on Ethernet (usually connected via 802.11n until people upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit Ethernet connection on them. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at each desk cluster. The Mac Mini server is connected directly to the top-level switch. Update: Failing information from someone who has done this in practice, I suppose my question is really around how switches work. If three or four people are backing up simultaneously, and two other (different) users then transfer a file between each other, will they be able to transfer the file at gigabit speeds?
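
    A rough way to reason about the switching question: each switch port forwards at line rate independently, so streams only contend where they converge on the same port (here, the Mac Mini's single uplink). A small sketch of that arithmetic, with illustrative numbers only:

      # Per-port contention sketch; figures are assumptions, not measurements.
      GIGABIT_MB_S = 1000 / 8            # one gigabit port, ~125 MB/s

      backups_running = 4                # clients backing up at once
      share = GIGABIT_MB_S / backups_running
      print(f"Each backup gets roughly {share:.0f} MB/s of the server's uplink")

      # A copy between two *other* clients never crosses the server's port,
      # so it still sees close to the full gigabit link (minus overhead):
      print(f"Peer-to-peer copy: ~{GIGABIT_MB_S:.0f} MB/s")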

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website... "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article

  • Folder sharing: NTFS permissions vs. share permissions

    - by Muhammad Adly
    I have a problem on my domain. The history starts from when I had a server with WIN 2008 R2 installed with the following roles on it (AD, DNS, DHCP, File). A month ago I decided to install a new Server 2008 R2 machine to take (AD, DNS, DHCP) and leave the file server on the old one. I did the following exactly:
    1) robocopy all my data to an external HDD
    2) Install a new server with 2008 R2
    3) Transfer all 5 roles to move the domain to the new server (MainDC)
    4) Issue: (NETLOGON, SYSVOL) were not transferred, but I decided to reinitialize them and now they are operating (MainDC)
    5) Re-create and re-configure new GPOs and link them to my OUs
    6) Reinstall the old server's operating system with a fresh installation of WIN 2008 R2 (FileServer)
    7) Join my domain with my domain credentials
    The issue: when I try to share a folder on \\fileserver, the permissions that I set in the sharing permissions are applied to the main shared folder and subfolders, but the security settings are not applied. I.e., say I'm sharing \\fileserver\MainFolder with a sharing permission allowing Authenticated Users to read, so everyone can read this main shared folder. If I set a security permission on \\fileserver\MainFolder\User1 so that User1 can Read/Write/Modify, User1 cannot perform these operations when accessing it from the network share. I tried a lot of steps from topics online: take ownership of the folder, remove inheritance from the parent folder, apply changes to child objects. I also tried to construct a new folder structure, but the same issue remains; I tried another host PC and got the same issue.

    Read the article

  • Why does Integrated Windows Authentication fail when clients connect from off the network?

    - by Bryan
    My background is not with web applications so this problem is hard for me to explain easily. First I'll try to describe the setup.
    Client setup:
    - The only browser that is affected is IE 6-8 (Firefox, Chrome, Opera, and Safari all work fine).
    - A user will try to access our web application from a company laptop that is not connected to our network.
    - This machine will be a member of our workgroup and have the company DNS listed as a trusted intranet site (to which the application in question would belong).
    - The security logon mode is set to "Automatic logon only in Intranet zone", and IWA authentication is enabled in the client's browser.
    Server setup:
    - Windows Server 2003 SP2.
    - The application will first redirect to an authorization ASP page which has anonymous access disabled and IWA enabled in IIS.
    What should happen is that, since the client is not currently on the network, when this page is called it should prompt the user for network credentials. But with IE, instead of prompting, the user gets a "page cannot be displayed" error because IIS is denying access to the ASP page. If the company DNS is removed from the trusted intranet site list then it prompts correctly, but that disables single sign-on the next time that computer is connected to the network or VPN. My assumption is that since IE uses IWA and the site is listed as an internal site, when no network is found IE just sends nulls to the server attempting to authenticate, which are swiftly punted back. Other browsers do not have security zones, so when network credentials are not present the server prompts for them. Is there a way to get around this so that our clients can keep the company DNS in the intranet zone but still have the server prompt for credentials when not on the network? Any attempt to allow anonymous access on the ASP page, as far as I know, will cause AUTH_USER to return null and again break SSO. I realize this is slightly rambling so I will do my best to clarify any questions you guys might have. Thanks in advance.

    Read the article

  • How do I debug a crash when I run my garbage-collected app in Rosetta?

    - by Rob Keniger
    I have a Universal app which is targeting 10.5 and which uses garbage collection. I am building for ppc, i386 and x86_64. I don't have access to a physical PowerPC machine so I am trying to use Rosetta to confirm that the PowerPC portion of the app works correctly. However, as soon as the app is launched in Rosetta it immediately crashes with the following crash log:
      Process: FooApp [91567]
      Path: /Users/rob/Development/src/FooApp/build/Release 64-bit/FooApp.app/Contents/MacOS/FooApp
      Identifier: com.companyX.FooApp
      Version: 0.9 (build d540e05) (2)
      Code Type: PPC (Translated)
      Parent Process: launchd [708]
      Date/Time: 2010-04-09 18:32:23.962 +1000
      OS Version: Mac OS X 10.6.3 (10D573)
      Report Version: 6
      Exception Type: EXC_CRASH (SIGTRAP)
      Exception Codes: 0x0000000000000000, 0x0000000000000000
      Crashed Thread: 5
      ...snip non-relevant threads...
      Thread 5 Crashed:
      0 libSystem.B.dylib 0x8023656a __pthread_kill + 10
      1 libSystem.B.dylib 0x80235e17 pthread_kill + 95
      2 com.companyX.FooApp 0xb80bfb30 0xb8000000 + 785200
      3 com.companyX.FooApp 0xb80c0037 0xb8000000 + 786487
      4 com.companyX.FooApp 0xb80dd8e8 0xb8000000 + 907496
      5 com.companyX.FooApp 0xb8145397 spin_lock_wrapper + 1791
      6 com.companyX.FooApp 0xb801ceb7 0xb8000000 + 118455
    I have used the Apple docs on debugging translated apps and the information on this page to attach gdb to the app when it's running in Rosetta. The app immediately breaks into the debugger upon launch:
      Program received signal SIGTRAP, Trace/breakpoint trap.
      [Switching to thread 15107]
      0x9151fdd4 in auto_fatal ()
      (gdb) bt
      #0 0x9151fdd4 in auto_fatal ()
      #1 0x91536d84 in Auto::Thread::get_register_state ()
      #2 0x915372f8 in Auto::Thread::scan_other_thread ()
      #3 0x91529be4 in Auto::Zone::scan_registered_threads ()
      #4 0x91539114 in Auto::MemoryScanner::scan_thread_ranges ()
      #5 0x9153b000 in Auto::MemoryScanner::scan ()
      #6 0x9153049c in Auto::Zone::collect ()
      #7 0x915198f4 in auto_collect_internal ()
      #8 0x9151a094 in auto_collection_work ()
      #9 0x96687434 in _dispatch_call_block_and_release ()
      #10 0x9668912c in _dispatch_queue_drain ()
      #11 0x96689350 in _dispatch_queue_invoke ()
      #12 0x966895c0 in _dispatch_worker_thread2 ()
      #13 0x966896fc in _dispatch_worker_thread ()
      #14 0x965a97e8 in _pthread_body ()
      (gdb)
    I have no idea where to start with this. It looks like the garbage collector is failing very badly. Are garbage-collected PowerPC apps not supported in Rosetta? I can't see any mention of this limitation in the docs if so. Does anyone have any ideas?

    Read the article

  • How reliable is DateTime.UtcNow in Silverlight applications?

    - by Edward Tanguay
    I have a Silverlight application which users will be running in various time zones. The applications load their data from the server at one time, then cache it in IsolatedStorage. When I make changes to the data on the server, I want to be able to change the "last updated time" so that all applications download the newest data the next time they check this date. However, I'm a bit confused as to how to handle the time zone issue, since if the server is in New York and the update time is set to 2010-01-01 17:00:00, a client in Seattle that compares it to its local time of 2010-01-01 14:00:00 won't update and will continue to provide old data for three more hours. My solution is to always post the update time in UTC, not the local time on the server, then make the Silverlight app check with DateTime.UtcNow. Is this as easy as it sounds, or are there issues with this, e.g. that time zones are not set correctly on computers and hence the Silverlight app does not report the correct UTC time? Can anyone say from experience how likely it is that using DateTime.UtcNow like this for cache refreshing will work in all cases? If DateTime.UtcNow is not reliable, I will just use an incremented "DataVersion" integer, but there are other scenarios in which getting time zone synchronization down would make it useful to thoroughly understand how to solve this in Silverlight apps.
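
    Platform aside, the core of the check is just comparing two UTC instants, so the client's local time zone never enters into it. A minimal sketch of that cache-refresh logic (written in Python for illustration; the Silverlight client would do the same comparison with DateTime.UtcNow against a UTC timestamp published by the server):

      from datetime import datetime, timezone

      def needs_refresh(server_last_updated_utc, cached_at_utc):
          """True when the server has published data newer than the cache."""
          return server_last_updated_utc > cached_at_utc

      # 17:00 New York (EST, UTC-5) is 22:00 UTC, so the comparison works the
      # same for a client in Seattle, Berlin or anywhere else.
      server_ts = datetime(2010, 1, 1, 22, 0, tzinfo=timezone.utc)
      cached_ts = datetime.now(timezone.utc)
      print(needs_refresh(server_ts, cached_ts))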

    Read the article

  • How to transform a coordinate from WGS84 to a projected coordinate with PROJ.4?

    - by Sanoj
    I have a GPS coordinate in WGS84 that I would like to transform to a map-projection coordinate in SWEREF99 TM using PROJ.4 in Java or proj4js in JavaScript. It's hard to find documentation for PROJ.4 and how to use it. If you have a good link, please post it as a comment. The PROJ.4 parameters for SWEREF99 TM are +proj=utm +zone=33 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs. I have tried to use a PROJ.4 Java library with this code and these values:
      String[] proj4_w = new String[] { "+proj=utm", "+zone=33", "+ellps=GRS80", "+towgs84=0,0,0,0,0,0,0", "+units=m", "+no_defs" };
      Projection proj = ProjectionFactory.fromPROJ4Specification(proj4_w);
      Point2D.Double testLatLng = new Point2D.Double(55.0000, 12.7500);
      Point2D.Double testProjec = proj.transform(testLatLng, new Point2D.Double());
    This gives me the point Point2D.Double[5197915.86288144, 1822635.9083898761], but it should be N: 6097106.672, E: 356083.438. What am I doing wrong? And what method and parameters should I use instead? The correct values are taken from Lantmäteriet. I am not sure if proj.transform(testLatLng, new Point2D.Double()) is the right method to use.
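
    One way to cross-check the expected output and the axis order is with pyproj (a sketch, assuming pyproj is installed; SWEREF99 TM is EPSG:3006):

      # With always_xy=True the input order is (lon, lat), i.e. (12.75, 55.0)
      # here; passing (55.0, 12.75) as (x, y) is a common cause of results
      # like the one above.
      from pyproj import Transformer

      to_sweref = Transformer.from_crs("EPSG:4326", "EPSG:3006", always_xy=True)
      easting, northing = to_sweref.transform(12.75, 55.0)
      print(f"E: {easting:.3f}  N: {northing:.3f}")   # ~356083.4 / 6097106.7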

    Read the article

  • How reliable is DateTime.UtcNow in Silverlight applications?

    - by Edward Tanguay
    I have a Silverlight application which users will be running in various time zones. These applications load their data from the server upon startup, then cache it in IsolatedStorage. When I make changes to the data on the server, I want to be able to change the "last updated time" so that all Silverlight clients download the newest data the next time they check this date. However, I'm a bit confused as to how to handle the time zone issue, since if the server is in New York and the update time is set to 2010-01-01 17:00:00, a client in Seattle that compares it to its local time of 2010-01-01 14:00:00 won't update and will continue to provide old data for three more hours. My solution is to always post the update time in UTC, not the local time on the server, then make the Silverlight app check with DateTime.UtcNow. Is this as easy as it sounds, or are there issues with this, e.g. that time zones are not set correctly on computers and hence the Silverlight app does not report the correct UTC time? Can anyone say from experience how likely it is that using DateTime.UtcNow like this for cache refreshing will work in all cases? If DateTime.UtcNow is not reliable, I will just use an incremented "DataVersion" integer, but there are other scenarios in which getting time zone synchronization down would make it useful to thoroughly understand how to solve this in Silverlight apps.

    Read the article

  • IBM Informix Spatial DataBlade SELECT statement error

    - by changed
    Hi, I am using the IBM Informix Spatial DataBlade module for some geo-specific data. I am trying to find points in table xmlData lying in a specified region, but I am getting this error for the SELECT statement:
      SELECT sa.pre, sa.post FROM xmlData sa WHERE ST_Contains( ST_PolyFromText('polygon((2 2,6 2,6 6,2 6,2 2))',6), sa.point)
      Query: select count(*) as mycnt fromText('polygon((2 2,6 2,6 6,2 6,2 2))',6),sa.point)
      Error: -201 [Informix][Informix ODBC Driver][Informix]A syntax error has occurred. (SQLPrepare[-201] at /work/lwchan/workspace/OATPHPcompile/pdo_informix/pdo_informix/informix_driver.c:131)
    I would appreciate any help with this. The setup is:
      CREATE TABLE xmlData (row_id integer NOT NULL, x integer, y integer, tagname varchar(40,1), point ST_POINT);
      EXECUTE FUNCTION SE_CreateSRID(0, 0, 250000, 250000, "use the return value in next query last column");
      INSERT INTO geometry_columns (f_table_catalog, f_table_schema, f_table_name, f_geometry_column, geometry_type, srid) VALUES ("mydatabase", "informix", "xmlData", "point", 1, 6); -- database name, user name, table name, spatial column name, column type (1 = point), srid (use the value returned by the query above)
      INSERT INTO xmlData VALUES (1, 20, 20, 'country', ST_PointFromText('point (20 20)', 6));
      INSERT INTO xmlData VALUES (1, 12, 13, 'sunday', ST_PointFromText('point (12 13)', 6));
      INSERT INTO xmlData VALUES (1, 21, 22, 'monday', ST_PointFromText('point (21 22)', 6));
      SELECT sa.pre, sa.post FROM xmlData sa WHERE ST_Contains( ST_PolyFromText('polygon((1 1,30 1,30 30,1 30,1 1))', 6), sa.point);
    I am using the following query as a reference ("ibm link"):
      SELECT name, type, zone FROM sensitive_areas WHERE SE_EnvelopesIntersect(zone, ST_PolyFromText('polygon((20000 20000,60000 20000,60000 60000,20000 60000,20000 20000))', 5));

    Read the article

  • Twig templates, inheritance and block usage

    - by user846226
    I've created three templates using Twig. The first one has block A defined in it; the second one extends the first one, but includes a third template which sets the content of block A. When loading, through the browser, the URL which renders the second template, the content in block A (defined by the third template) is not positioned where block _A is defined. Example:
      <!-- base.html.twig -->
      {% block _css '' %}
      {% block _header '' %}
      {% block _content '' %}
      {% block _footer '' %}
      {% block _js '' %}
      <!-- layout.html.twig -->
      <!-- header and footer are placed in the right zone -->
      {% extends ::base.html.twig %}
      {% block _header %}
      {% render "MyBundleBundle:Header:header" %}
      {% endblock %}
      {% block _footer %}
      {% render "MyBundleBundle:Footer:footer" %}
      {% endblock %}
      <!-- my_template.html.twig -->
      <!-- content is also placed in the right zone but css and js blocks in the included template are not placed where declared in base.html.twig -->
      {% extends MyBundleBundle::layout.html.twig %}
      {% block _content %}
      SOME_CONTENT
      {% include MyBundleBundle::my_included_template.html.twig %}
      {% endblock %}
      <!-- my_included_template.html.twig -->
      {% block _css %}
      <link.......................>
      {% endblock %}
      {% block _js %}
      <script..................>
      {% endblock %}
      MORE CONTENT BELONGING TO THE INCLUDED TEMPLATE
    What I expect here is for the _css block's content to appear at the top of the page and the _js block's content at the bottom, but that's not happening. I hope you can see where I'm going wrong, thanks!

    Read the article
