Search Results

Search found 27958 results on 1119 pages for 'failed to load viewstate'.

Page 676/1119

  • File saving disabled: 'Saving has been disabled by system admin'

    - by Gubuntu
    I have coded my own HTML website recently, and today wished to add a Google Calendar object to it. I have not put this website on the web because it is for my own personal use and I can't buy a domain, so I just have a folder on my PC that I load the index.html from now and then. As I was saying, today I got an error while trying to save the Google Calendar object into the page. I am the system admin on my PC; in fact no one else uses it but me, except when I have friends round, but for once my PC seems to think I'm some standard account user, because I couldn't save. I thought of clicking close and seeing if it came up with Save As, but it didn't; it said 'Are you sure you want to close without saving?' or something along those lines, and 'Saving has been disabled by your system admin.' I couldn't do anything. I tried looking at the settings of the file, and it had me as read-only in one of the selections, so I changed that to read & write, but to no avail. I did not save as root when I last edited the file, so I don't get what's going on. Help! P.S. This is on Ask Ubuntu and not Super User because it is on my Ubuntu PC and it appears to be a problem with Ubuntu, not with root access or hardware.

    Read the article

  • Linux cron spamming me about php/suhosin, then stopping

    - by acidzombie24
    My server emails me whenever any message goes to root, and cron sends me messages. Today I got over 300 emails from my server, all of which are: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/suhosin.so' - /usr/lib/php5/20090626+lfs/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0 I have no idea why. I went to debug it; however, it stopped about 5 hours ago, so there's nothing I can look at except maybe logs. Why might this have happened? The disk isn't full and I have enough RAM available.

    Read the article

  • xinetd vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java-based web server on port 80. The options are: a web proxy (Apache, nginx, etc.), xinetd, iptables, or setuid. The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives, so new connections are made for every proxied request. xinetd is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.

    Read the article

  • How do applications (and the OS) handle very big files?

    - by DrStrangeLove
    For instance, I have a video file which is 11.8 GB, but my machine has only 2 GB of RAM. How does VLC (or other software) handle it? How does it load the file into memory? I used the VMMap tool (from Sysinternals) to take a look at memory, and I saw: private 160,000K, working set 100,000K. Obviously, that's much less than 11.8 GB, so how does that happen? This question is not only about video; I'd like to know how computers, in general, handle very large files. (A memory-mapping sketch follows below.)

    Read the article
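
    In short, players don't pull the whole file into RAM; they read or map a small window at a time and let the OS page data in and out on demand, which is why VMMap shows a working set far smaller than the file. A minimal C# sketch of the memory-mapped variant (the path, offset and window size below are made up for illustration):

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;

        class BigFileWindow
        {
            static void Main()
            {
                // File path is an assumption; any multi-gigabyte file behaves the same way.
                using (var mmf = MemoryMappedFile.CreateFromFile(@"C:\video\movie.mkv", FileMode.Open))
                // Map only a 64 MB window starting 8 GB into the file. The OS pages data in on demand,
                // so the working set stays far below the file size, much like what VMMap showed.
                using (var view = mmf.CreateViewAccessor(8L * 1024 * 1024 * 1024, 64L * 1024 * 1024))
                {
                    byte firstByte = view.ReadByte(0);
                    Console.WriteLine($"First byte of the mapped window: {firstByte}");
                }
            }
        }

    Plain sequential buffered reads (a FileStream with a small buffer) achieve the same effect for simple playback.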

  • No "Choose OS" screen after co-installation of Ubuntu 12.04

    - by Léon McGregor
    I don't have any technical knowledge of Ubuntu but wanted to install it on a partition. Step by step, this is what I did:

        - used Disk Management in Windows 7 to clear 50 GB of space
        - used a USB live Ubuntu 12.04 installer in custom install mode
        - used the installer's built-in disk manager to set up 2 GB of swap space and 48 GB of ext4 space
        - ran the installation, with the option to copy over documents and settings from Windows 7
        - restarted after completion

    After this, my computer automatically loads Ubuntu 12.04 and skips the option to load Windows 7. I know the files are still there, as I can see them in the file manager. When trying to fix this with the Win7 installer DVD, it tries to repair the OS on drive [D:], i.e. it recognises the disc itself as the OS and ignores the [C:] files. I think, after browsing around here, similar problems suggest this is a problem with the boot loader, but if the Win7 DVD won't work, then I don't have any way to fix this. Does anyone know of a way to force the computer to show a "choose OS" screen?

    Read the article

  • Random timeouts in IIS7

    - by Cor-Paul
    Hi, I have a weird problem which I think is caused by my IIS7 installation on Vista 64-bit. I have a bunch of AJAX calls and dynamic JS file loads (~30) in a local application, and I get random timeouts (or so it seems) in my browser. In Chrome it looks like the page just stops loading (no HDD activity); in Firefox/Firebug I can see that some of the files are being loaded but they never actually finish. When I reload the page the same thing occurs, but for (random) other files that must be loaded. When I try to load one of those JS files concurrently (so during the timeout in FF) in another browser, the file loads there, so I am pretty sure the request can be handled by IIS. I am thinking of a limit on simultaneous requests from the same browser which is not working correctly, but I am pretty clueless about how to solve this. Does anyone recognize this problem and know a solution? Thanks!

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names, and also due to numeric data taking significantly more space than the native datatypes would. A small level could easily end up as 1 MB+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded. So I want to keep the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format: to remove the massive duplication of tag/attribute names, and maybe also to give attributes native types so that, for example, floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem; it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? Are any ideal for game use, with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression) and taking the memory/performance hit on load? (A rough sketch of the string-table idea follows below.)

    Read the article
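
    This wheel has been reinvented a few times (Fast Infoset and W3C EXI are the standardised takes), but the core of a home-grown format is exactly what the question describes: a string table so each tag/attribute name is stored once, plus native-typed values. A rough C# sketch, with the on-disk layout and all names invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Minimal illustration of the "string table + native-typed values" idea.
        class BinaryNodeSketch
        {
            static readonly Dictionary<string, ushort> Ids = new Dictionary<string, ushort>();
            static readonly List<string> Names = new List<string>();

            static ushort Intern(string name)
            {
                if (!Ids.TryGetValue(name, out var id))
                {
                    id = (ushort)Names.Count;
                    Ids[name] = id;
                    Names.Add(name);
                }
                return id;
            }

            static void Main()
            {
                using (var w = new BinaryWriter(File.Create("level.bin")))
                {
                    // One node: tag id, attribute count, then (attribute id, float value) pairs.
                    w.Write(Intern("enemy"));                 // tag name stored once, referenced by a 2-byte id
                    w.Write((byte)2);                         // attribute count
                    w.Write(Intern("x")); w.Write(12.5f);     // floats stored as 4 bytes, not as text
                    w.Write(Intern("y")); w.Write(98.25f);

                    // String table appended at the end; a real format would record its offset in a header.
                    w.Write(Names.Count);
                    foreach (var name in Names) w.Write(name);
                }
                Console.WriteLine($"Wrote level.bin with {Names.Count} interned names.");
            }
        }

    A loader would read the string table first (via a header offset) and then rebuild the node tree, keeping the add-new-attributes-with-defaults flexibility of the XML version.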

  • EC2 out of space on root disk, moving it to ephemeral

    - by Joseph Misiti
    I am spawning a few test servers on EC2 that happen to be m1.larges. I am using these test servers for load balancing testing. Anyway, most of the servers I have used before have been backed by EBS, but these instances (Ubuntu 11.04) obviously come with a lot of ephemeral space located at /mnt. What I noticed is that I am running out of space on the root disk. I am trying out this tutorial http://www.turnkeylinux.org/docs/using-instance-storage, moving my /home and /usr directories to /mnt and then remounting them. This works, except it does not survive a reboot. Am I missing something here, or is this tutorial not completely correct? How do I make space on my / drive so I can do stuff and survive reboots?

    Read the article

  • Posting data from multiple servers to a client server, routed through one server

    - by Swaroop Kundeti
    I have 5 web servers behind a load balancer, and we have a client server at the other end. The client has whitelisted my 5 web servers' public IPs so that my web servers can post a file to the client server. The problem is that the number of my web servers is going to increase, and I cannot always ask the client to whitelist my new web server IPs. So I would like to set up my infrastructure this way: my web servers will post data to the client server routed through a single server. Assume that web-1 is the main server and the remaining 4 web servers will post data to the client server routed through web-1. I was told that this can be achieved by doing IP tunneling, but I have no idea how to do that. Any kind of help would be great. (A forward-proxy sketch follows below.)

    Read the article
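
    IP tunneling (or SNAT on web-1 with iptables) is one route, but an application-level alternative is to run a forward proxy such as Squid on web-1 and point every web server's outbound HTTP at it, so the client only ever sees web-1's address. A hedged C# sketch of what the posting side might look like; the host name, port, file name and URLs are assumptions:

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        class ProxiedUpload
        {
            static async Task Main()
            {
                // All outbound requests go through a forward proxy assumed to run on web-1,
                // so the client server only ever sees web-1's whitelisted public IP.
                var handler = new HttpClientHandler
                {
                    Proxy = new WebProxy("http://web-1.internal:3128"),
                    UseProxy = true
                };
                using var http = new HttpClient(handler);

                var payload = new ByteArrayContent(System.IO.File.ReadAllBytes("report.csv"));
                var response = await http.PostAsync("https://client.example.com/upload", payload);
                Console.WriteLine($"Client responded with {(int)response.StatusCode}");
            }
        }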

  • PHP: Extensionless URLs in IIS7 (Windows)? (for WordPress)

    - by smithym
    Hi there, I have recently installed WordPress but I would like to configure extensionless URLs. I am using IIS7, but on a shared server. I presume I can add something to the web.config file? I am a little bit confused: in IIS7 and ASP.NET MVC this is done via code, but in PHP I don't think it is, so the only alternative is to use a rewrite module, but I can't, as I am on a shared server and can't install ISAPI stuff. So I was wondering if there is a way to do the mapping, i.e. when going to /testme it would actually load testme.php. Any advice really appreciated. Thanks

    Read the article

  • Building an optimal custom machine for SQL Server

    - by Chad Grant
    Getting the hardware in the mail any day. Hardware related to my question: 10x 15.5k RPM SAS Seagate Cheetahs and 2x Adaptec 5405 PCIe RAID cards; the motherboard has integrated SAS RAID. I was thinking I would build two RAID 10 arrays, one for data and one for logs, and use the remaining 2 drives in a RAID 0 for TempDB. I will probably throw in a drive for the OS. Does putting the SQL Server application/exes on a RAID array make a difference, and is there any impact from leaving the OS on a relatively slow disk compared to the RAID arrays? I have 5 or 6 DBs, combined < 50 GB, with a relatively good / constant load, estimating 60-70% reads vs writes. Planning on using log shipping as well, if that matters. Any advice or suggestions?

    Read the article

  • How to set expiration date for external files? [closed]

    - by garconcn
    I have a site that includes lots of external files, most of them in GIF format. I have no control over the external files, but have to use them (with permission). When I check the site using Google PageSpeed, I get a very low score (31) even though the page loads fast. One of the high-priority suggestions is to leverage browser caching by setting an expiration date. However, all the files are on external links. I have already set the expiration date for local files. (A mirroring sketch follows below.)

    Read the article
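
    Since expiration headers can only be set by the server that actually serves the files, one workaround (if the permission covers it) is to mirror the external GIFs into a local folder on a schedule, so they are served under your own caching rules. A minimal C# sketch; the URLs and paths are made up:

        using System;
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AssetMirror
        {
            // Example external files; the real list would come from the pages that embed them.
            static readonly string[] Urls =
            {
                "http://external.example.com/images/banner1.gif",
                "http://external.example.com/images/banner2.gif"
            };

            static async Task Main()
            {
                Directory.CreateDirectory("wwwroot/mirrored");
                using var http = new HttpClient();
                foreach (var url in Urls)
                {
                    var bytes = await http.GetByteArrayAsync(url);
                    var target = Path.Combine("wwwroot/mirrored", Path.GetFileName(new Uri(url).LocalPath));
                    await File.WriteAllBytesAsync(target, bytes);      // now served locally, so the existing
                    Console.WriteLine($"Mirrored {url} -> {target}");  // Expires/Cache-Control rules apply
                }
            }
        }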

  • Is there a way to make NoScript always allow .pdf files?

    - by Ben
    I'm using Firefox with NoScript to stop the bad stuff. I've also told Acrobat Reader to load .pdf files in its own window instead of inside the browser (because sometimes it locks up, and then I would have to restart the browser). However, whenever I come across a .pdf file, I always get a new tab completely covered by the NoScript box. Then I can click anywhere in that page, and NoScript asks me if I'm sure I want to allow it. Then Acrobat Reader is launched in its own window, but the Firefox tab remains, and I have to close it. It seems like NoScript is getting in the way of Acrobat's attempt to just open the file without making a new tab. Is there a way to tell NoScript to always allow .pdf files (or any other suggestion to make that annoying blank tab go away by itself)?

    Read the article

  • Barcode scanner timing to Excel

    - by Claire
    I have a barcode reader that will load the barcode into an Excel spreadsheet (great), but now I need it to do two things: 1. Add a timestamp next to the barcode that has been scanned, so the barcode goes to A1 and I need B2 to show the time. 2. Find the name (which is in a separate Excel sheet) that is linked to that barcode and add it to C1, next to the time and barcode. I use this method to time trail running: each person is issued a barcode, and as they go through various points their barcode is scanned and a time is allocated to their name. (A timestamp-and-lookup sketch follows below.)

    Read the article
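
    Inside Excel this is typically done with a Worksheet_Change macro plus a lookup formula, but the same timestamp-and-lookup logic is easy to sketch outside Excel as a small console program that reads the scanner (most scanners act as keyboards) and appends to a CSV. File names and the roster format below are assumptions:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class ScanLogger
        {
            static void Main()
            {
                // Roster maps barcode -> runner name; one "barcode,name" pair per line.
                var roster = File.ReadLines("roster.csv")
                                 .Select(l => l.Split(','))
                                 .ToDictionary(p => p[0].Trim(), p => p[1].Trim());

                Console.WriteLine("Scan a barcode (blank line to quit):");
                string barcode;
                while (!string.IsNullOrWhiteSpace(barcode = Console.ReadLine()))
                {
                    var name = roster.TryGetValue(barcode.Trim(), out var n) ? n : "UNKNOWN";
                    var line = $"{barcode.Trim()},{DateTime.Now:HH:mm:ss},{name}";
                    File.AppendAllText("results.csv", line + Environment.NewLine);
                    Console.WriteLine(line);
                }
            }
        }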

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script that has two public fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method:

        void Start () {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300,300,300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100,100,100);
        }

    At first I thought it didn't work. Later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but right after that the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that it now stays scaled all the time:

        void Update () {
            aStarCellHighlight.transform.localScale = new Vector3(100,100,100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (input, logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (two-part) question is: Why doesn't the scale stick when I set it at the beginning, in the Start() method, and why do I have to keep scaling it in the Update() method? And why does it take so long for the scale to "apply/show up"? (See the sketch below.)

    Read the article
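
    Worth ruling out: anything on the prefab that writes the scale every frame (for example an animation clip imported with the .blend file, or another script) will undo a one-off change made in Start(). Note also that assigning transform.parent preserves the object's world-space size by recomputing localScale, so SetParent(parent, false) makes the intent explicit. A minimal sketch reusing the poster's field names, as one assumed way the spawner could be written:

        using UnityEngine;

        public class CellHighlightSpawner : MonoBehaviour
        {
            // Same public fields as in the question; assign them in the Inspector.
            public Collider terrain;
            public GameObject aStarCellHighlightPrefab;

            private GameObject aStarCellHighlight;

            void Start()
            {
                aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                                                             new Vector3(300, 300, 300),
                                                             terrain.transform.rotation);
                aStarCellHighlight.name = "cellHighlight";

                // SetParent with worldPositionStays = false keeps the local position/rotation/scale
                // values (reinterpreted relative to the terrain) instead of recomputing localScale
                // to preserve the world-space size.
                aStarCellHighlight.transform.SetParent(terrain.transform, false);
                aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            }
        }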

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun up a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an in-memory
                // structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters (see the sketch below). This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article
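
    The "Worker that takes the needed interfaces as constructor parameters" the poster mentions might look roughly like the sketch below; every interface and type name here is invented for illustration. The main payoff is that the queue, uploader and notifier can be swapped for fakes in tests, which goes some way toward answering the "is a console app a bad idea" worry:

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Illustrative interfaces, not the author's actual types.
        public interface IDocumentQueue   { IEnumerable<string> GetFiles(string pattern); }
        public interface IContentUploader { void Upload(byte[] data, IDictionary<string, string> metadata); }
        public interface INotifier        { void Report(IEnumerable<string> validationErrors); }

        public class QueueWorker
        {
            private readonly IDocumentQueue _queue;
            private readonly IContentUploader _uploader;
            private readonly INotifier _notifier;

            public QueueWorker(IDocumentQueue queue, IContentUploader uploader, INotifier notifier)
            {
                _queue = queue;
                _uploader = uploader;
                _notifier = notifier;
            }

            public void Run(string filterPattern)
            {
                var errors = new List<string>();
                foreach (var file in _queue.GetFiles(filterPattern))
                {
                    var metadata = ExtractMetadata(file);   // stub: pull metadata from the document
                    if (metadata == null)
                    {
                        errors.Add($"Could not extract metadata from {Path.GetFileName(file)}");
                        continue;                           // "bounce" the file
                    }
                    _uploader.Upload(File.ReadAllBytes(file), metadata);
                }
                if (errors.Count > 0) _notifier.Report(errors);
            }

            private IDictionary<string, string> ExtractMetadata(string file)
            {
                // Placeholder: real code would parse PDF/XML/Office metadata here.
                return new Dictionary<string, string> { ["FileName"] = Path.GetFileName(file) };
            }
        }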

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used to not just version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it were handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open source preferred, since the final product is to be FOSS, too.

    Read the article

  • Problems burning Xvid and DivX files

    - by chobo2
    Hi, I am using Windows 7 Ultimate 64-bit edition and Nero 8. Recently I noticed that every time I go to Nero and burn a DivX/Xvid file as a data DVD and try to play it in a DVD player that supports these formats, it does not load up and play. I then go to my XP machine with Nero 6 on it and burn the exact same file as a data DVD, and it works in the exact same DVD player. I am not sure why this is happening, or whether it is Windows 7, Nero 8 or my actual DVD burner that is the problem. How should I go about figuring out what the problem is? Thanks

    Read the article

  • Boot disc isn't loading on my system

    - by acidzombie24
    I am trying to update the firmware on my hard disk. I grabbed Seagate's Windows setup tool, which didn't boot into the app to update the firmware, so I burned their ISO image. Their ISO also doesn't boot, and I vaguely remember something about Windows not recognizing my disc because of an EFI thing. It probably has nothing to do with it. Anyway, how do I boot into the disc? I tried going into the advanced options to boot directly to the disc and I get a blank screen. I can use Ctrl+Alt+Del, which reboots the system, but other than that it's blank and doesn't seem to load anything from the disc. The disc was a 7 MB ISO burnt using Windows 7's built-in ISO burner (Seagate's site suggests using it). I have no idea what to do. Do any of you guys know what my problem may be? The media is DVD-R.

    Read the article

  • Concurrent backups in SQL Server?

    - by Mikey Cee
    We currently have our backups managed by a third-party company. There are a bunch of agent jobs created that take full backups (4 times a day) and transaction log backups (4 times an hour). We now want to manage our backups in-house, but don't want to disable the third party's jobs until we are sure that we have everything configured correctly internally. So I am proposing to have a short period (say, a couple of days) where backups are being taken by both the old and the new system. I am wondering what the ramifications are of having these two different systems both manage backups, and the potential pitfalls of having backups taken simultaneously. Is this even supported? If so, and bearing in mind that the system can cope with one backup without any noticeable performance degradation, is it fairly logical to assume that it should be able to cope with two simultaneous backups? Currently the load on the server is fairly light and it rarely struggles. Any advice is appreciated. (A copy-only backup sketch follows below.)

    Read the article
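
    The usual pitfall with two overlapping backup systems is not the load but the backup chains: a full backup resets the differential base, and each log backup truncates log records that the other system's restore sequence would need. One common mitigation during the overlap is to take the new system's backups WITH COPY_ONLY, which touches neither. A hedged C# sketch; the connection string, database name and path are placeholders:

        using System;
        using System.Data.SqlClient;

        class CopyOnlyBackup
        {
            static void Main()
            {
                // Connection string, database name and backup path are placeholders.
                const string connStr = "Server=.;Database=master;Integrated Security=true";
                const string sql = @"BACKUP DATABASE [MyDb]
                                     TO DISK = N'D:\Backups\MyDb_copyonly.bak'
                                     WITH COPY_ONLY, INIT, CHECKSUM;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn) { CommandTimeout = 0 })
                {
                    conn.Open();
                    // COPY_ONLY leaves the differential base and the log chain untouched,
                    // so it will not interfere with the third party's backup jobs.
                    cmd.ExecuteNonQuery();
                    Console.WriteLine("Copy-only backup finished.");
                }
            }
        }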

  • Issue 55 - Skin Object Tokens, Optimized Control Panel, OWS Validation and Security, RAD

    April 2010. Welcome to Issue 55 of DNN Creative Magazine. In this issue we focus on the new Skin Object token method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object Token is a web user control which covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use, etc. Following this we demonstrate how to install and use two advanced DotNetNuke admin control panels which are available for free from Oliver Hine. These control panels provide an optimized version of the admin control panel to improve performance and page load times, as well as a ribbon bar control panel which adds additional features. Next, we continue the Open Web Studio tutorials; this month we demonstrate some very advanced techniques for building a car parts application in Open Web Studio. Throughout the tutorial we cover form input, validation, how to use dependent drop-down lists, populating checkbox lists, and we introduce a new concept of data-level security, which allows you to control which data a user can access within a module. To finish, we have part five of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to implement paging. This issue comes complete with 14 videos:

        - Skinning: Skin Object Tokens for DotNetNuke 5 (8 videos - 64 mins)
        - Free Module: Advanced Optimized Control Panel by Oliver Hine (1 video - 11 mins)
        - Module Development Series: Form Validation, Dependant Drop Downs and Data Level Security in OWS (5 videos - 44 mins)
        - How to Implement Paging with DotNetMushroom RAD

    View issue 55 to download all of the videos in one zip file. DNN Creative Magazine for DotNetNuke Web Designers covers DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 55 issues we have created 563 videos!

    Read the article

  • What to do with an HP Itanium box?

    - by VivekRJ
    One of my customers has an HP Itanium box (Integrity Rx6600, I think). They want to use it for running our apps (Linux-based). I was initially hoping to put ESXi on it and load Ubuntu 10.10, but I was surprised to find that IA-64 support is declining: Windows discontinued support since 2008, Ubuntu 10.04 is the last supported release, CentOS is also unsure, and VMware ESXi is not supported. What are people doing? Are people running Ubuntu 10.04 on Itanium successfully? Also, FreeBSD 8.2 says it supports it; are they going to stick with the platform?

    Read the article

  • Gigabyte Motherboard + Adaptec RAID = No Booting from any drives

    - by Farseeker
    I have a brand new PC, just out of the box. It has a Gigabyte GA-P55-USB3 motherboard. I also have an Adaptec ASR-2504 SAS RAID card with 2x 15k Seagate Barracuda SAS drives attached. After the motherboard initialises its onboard RAID, it then initialises the Adaptec RAID. It detects all the RAID devices OK, but when it gets to "Loading Operating System..." (i.e. right before it should load the OS) it just sits there forever, doing nothing. If I force it to boot from the optical drive, it spins up for a few seconds then dies down again. If I remove the Adaptec RAID card, everything works perfectly. As soon as it's plugged back in, it never gets past that stage. The RAID card should be perfectly fine (it was before), but I have raised a case with Adaptec anyway. Any suggestions on what I can try to get these two to play nicely together?

    Read the article

  • Enabling AES 256 GCM on Windows Server 2012 R2

    - by Feanaro
    I'd like to enable the use of AES 256 GCM encryption instead of AES 256 CBC. We already have ECC certificates based on ECDSA, so that prerequisite has been fulfilled. The certificate has a SHA-256 signature and uses a 256-bit ECC key. The cipher suite I'd like to use is TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384. This is our cipher suite order: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P521, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P521, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256. Still, when I check the website it says we use TLS 1.2 with ECDHE_ECDSA for key exchange, AES_256_CBC for encryption and SHA1 for the message digest. I suspect it uses this suite for some reason: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256. When I remove that cipher suite, the site has a protocol mismatch and won't load over HTTPS anymore. Does anyone know how to enable the cipher suite? Did I forget to set something in the registry, or do I need to do something else to enable that specific suite? Thanks in advance!

    Read the article
