Search Results

Search found 21004 results on 841 pages for 'assembly load'.

  • Error instantiating Texture2D in MonoGame for Windows 8 Metro Apps

    - by JimmyBoh
    I have a game which builds for WindowsGL and Windows8. The WindowsGL build works fine, but the Windows8 build throws an error when trying to instantiate a new Texture2D.

    The code:

        var texture = new Texture2D(CurrentGame.SpriteBatch.GraphicsDevice, width, 1); // Error thrown here...
        texture.SetData(FunctionThatReturnsColors());

    You can find the rest of the code on GitHub.

    The error:

        SharpDX.SharpDXException was unhandled by user code
        HResult=-2147024809
        Message=HRESULT: [0x80070057], Module: [Unknown], ApiCode: [Unknown/Unknown], Message: The parameter is incorrect.
        Source=SharpDX
        StackTrace:
          at SharpDX.Result.CheckError()
          at SharpDX.Direct3D11.Device.CreateTexture2D(Texture2DDescription& descRef, DataBox[] initialDataRef, Texture2D texture2DOut)
          at SharpDX.Direct3D11.Texture2D..ctor(Device device, Texture2DDescription description)
          at Microsoft.Xna.Framework.Graphics.Texture2D..ctor(GraphicsDevice graphicsDevice, Int32 width, Int32 height, Boolean mipmap, SurfaceFormat format, Boolean renderTarget)
          at Microsoft.Xna.Framework.Graphics.Texture2D..ctor(GraphicsDevice graphicsDevice, Int32 width, Int32 height)
          at BrewmasterEngine.Graphics.Content.Gradient.CreateHorizontal(Int32 width, Color left, Color right) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Graphics\Content\Gradient.cs:line 16
          at SampleGame.Menu.Widgets.GradientBackground.UpdateBounds(Object sender, EventArgs args) in c:\Projects\Personal\GitHub\BrewmasterEngine\SampleGame\Menu\Widgets\GradientBackground.cs:line 39
          at SampleGame.Menu.Widgets.GradientBackground..ctor(Color start, Color stop, Int32 scrollamount, Single scrollspeed, Boolean horizontal) in c:\Projects\Personal\GitHub\BrewmasterEngine\SampleGame\Menu\Widgets\GradientBackground.cs:line 25
          at SampleGame.Scenes.IntroScene.Load(Action done) in c:\Projects\Personal\GitHub\BrewmasterEngine\SampleGame\Scenes\IntroScene.cs:line 23
          at BrewmasterEngine.Scenes.Scene.LoadScene(Action`1 callback) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Scenes\Scene.cs:line 89
          at BrewmasterEngine.Scenes.SceneManager.Load(String sceneName, Action`1 callback) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Scenes\SceneManager.cs:line 69
          at BrewmasterEngine.Scenes.SceneManager.LoadDefaultScene(Action`1 callback) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Scenes\SceneManager.cs:line 83
          at BrewmasterEngine.Framework.Game2D.LoadContent() in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Framework\Game2D.cs:line 117
          at Microsoft.Xna.Framework.Game.Initialize()
          at BrewmasterEngine.Framework.Game2D.Initialize() in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Framework\Game2D.cs:line 105
          at Microsoft.Xna.Framework.Game.DoInitialize()
          at Microsoft.Xna.Framework.Game.Run(GameRunBehavior runBehavior)
          at Microsoft.Xna.Framework.Game.Run()
          at Microsoft.Xna.Framework.MetroFrameworkView`1.Run()
        InnerException:

    Is this an error that needs to be solved in MonoGame, or is there something that I need to do differently in my engine and game?

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing, a Squid cluster for caching, and an IIS cluster for the application itself. HAProxy balances by URI to optimize the Squid hit rate, but we know that Squid holds a separate copy of each page for every distinct Accept-Encoding header the browsers send, so IE (gzip, deflate) gets a different cached copy than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding header, and I think the best place to do so would be in HAProxy. I'd appreciate some ideas on how to accomplish this without breaking clients that support neither gzip nor deflate.
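    For illustration, a minimal sketch of the kind of normalization being asked about, written against HAProxy's 1.4-era reqirep directive (section name is illustrative, and this is untested):

        frontend http-in
            # Anything that advertises gzip gets exactly "gzip"...
            reqirep ^Accept-Encoding:\ .*gzip.* Accept-Encoding:\ gzip
            # ...anything else that advertises deflate gets exactly "deflate".
            reqirep ^Accept-Encoding:\ .*deflate.* Accept-Encoding:\ deflate
            # Clients that support neither keep their original header (or none)
            # and fall through to the uncompressed copy, so they are not broken.

    This caps Squid at roughly three variants per URL (gzip, deflate, identity) instead of one per browser quirk.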

  • Random users randomly being unable to connect to my static content domain

    - by jls33fsls
    I store all of my images, JS, and CSS files on a separate domain to try to speed up page load times (it isn't a CDN, just a separate domain on the same server). This works fine for 99% of the users, 99% of the time. However, some users are randomly unable to connect to the static content domain for periods of 1-5 hours. They can reach the main site, but no images load and everything is just white because no CSS is being loaded. If they go to the static content domain itself, the page idles for a while and then times out with a blank white page and no error message. I have no idea what could be causing this, and it hasn't happened to me. Any ideas? I am running Apache on CentOS 5.5.

  • VPN/OpenVPN as a cloud service

    - by 8pipe
    I am working on creating a small cloud (any number of EC2 instances that can be deployed based on load) implementing VPN as a service for the company I'm working for. This is basically a project gathering various VPN resources under one aegis as a cloud-based service. As a user of OpenVPN I'm somewhat familiar with connecting, but I'm looking for resources to start this project. Essentially I need to be able to: run a certificate authority and manage keys to distribute to coworkers (a rough sketch of this step follows below); build an AMI that handles OpenVPN as a service; and balance the load among machine instances as needed. Any suggestions for tutorials, things to avoid, roadblocks I might not be seeing from a novice perspective, or just help in visualizing this would be appreciated.
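    On the certificate-authority point, a rough sketch using the easy-rsa 2.x scripts that ship with OpenVPN (paths and key names are illustrative):

        cd /etc/openvpn/easy-rsa      # location varies by distro/AMI
        . ./vars                      # set KEY_COUNTRY, KEY_ORG, ... first
        ./clean-all                   # initialise an empty keys/ directory
        ./build-ca                    # create ca.crt and ca.key
        ./build-key-server server     # server certificate
        ./build-key alice             # one client certificate per coworker
        ./build-dh                    # Diffie-Hellman parameters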

  • How do I boot a virtual machine image from my network?

    - by Haabda
    I have many machines which require the same configurations. My goal is to boot them all from the network and load a virtual machine. It would be wonderful to have one image for all of our customer service machines. That way, I could load the virtual image, perform updates, and know the next time they boot up they will have all the changes. Ideally, the machines would store the image locally and only download a new image if there has been a change. With all the information out there on "desktop virtualization", "PXE booting", and "virtual machines", I feel lost. I have been reading for hours and feel like I have only just scratched the surface. I would like to do this using open source or free software. Any suggestions?

  • NHibernate and Stored Procedures in C#

    - by Jess Nickson
    I was recently trying, and failing, to set up NHibernate (v1.2) in an ASP.NET project. The aim was to execute a stored procedure and return the results, but it took several iterations for me to end up with a working solution. In this post I am simply trying to put the required code in one place, in the hope that the snippets may be useful in guiding someone else through the same process. As it is kind of the first time I have had to play with NHibernate, there is a good chance that this solution is sub-optimal and, as such, I am open to suggestions on how it could be improved!

    There are four code snippets that I required:

    1. The stored procedure that I wanted to execute
    2. The C# class representation of the results of the procedure
    3. The XML mapping file that allows NHibernate to map from C# to the procedure and back again
    4. The C# code used to run the stored procedure

    The Stored Procedure

    The procedure was designed to take a UserId and, from this, go and grab some profile data for that user. Simple, right? We just need to do a join first, because the user's site ID (the one we have access to) is not the same as the user's forum ID.

        CREATE PROCEDURE [dbo].[GetForumProfileDetails]
        (
            @userId INT
        )
        AS
        BEGIN
            SELECT Users.UserID,
                   forumUsers.Twitter,
                   forumUsers.Facebook,
                   forumUsers.GooglePlus,
                   forumUsers.LinkedIn,
                   forumUsers.PublicEmailAddress
            FROM Users
            INNER JOIN Forum_Users forumUsers
                ON forumUsers.UserSiteID = Users.UserID
            WHERE Users.UserID = @userId
        END

    I'd like to make a shout out to Format SQL for its help with, well, formatting the above SQL!

    The C# Class

    This is just the class representation of the results we expect to get from the stored procedure. NHibernate requires a virtual property for each column of data, and these properties must have the same names as the column headers. You will also need to ensure that there is a public or protected parameterless constructor.

        public class ForumProfile : IForumProfile
        {
            public virtual int UserID { get; set; }
            public virtual string Twitter { get; set; }
            public virtual string Facebook { get; set; }
            public virtual string GooglePlus { get; set; }
            public virtual string LinkedIn { get; set; }
            public virtual string PublicEmailAddress { get; set; }

            public ForumProfile() { }
        }

    The NHibernate Mapping File

    This is the XML I wrote in order to make NHibernate a) aware of the stored procedure, and b) aware of the expected results of the procedure.

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           namespace="[namespace]" assembly="[assembly]">
          <sql-query name="GetForumProfileDetails">
            <return-scalar column="UserID" type="Int32"/>
            <return-scalar column="Twitter" type="String"/>
            <return-scalar column="Facebook" type="String"/>
            <return-scalar column="GooglePlus" type="String"/>
            <return-scalar column="LinkedIn" type="String"/>
            <return-scalar column="PublicEmailAddress" type="String"/>
            exec GetForumProfileDetails :UserID
          </sql-query>
        </hibernate-mapping>

    Calling the Stored Procedure

    Finally, to bring it all together, the C# code that I used in order to execute the stored procedure!

        public IForumProfile GetForumUserProfile(IUser user)
        {
            return NHibernateHelper
                .GetCurrentSession()
                .GetNamedQuery("GetForumProfileDetails")
                .SetInt32("UserID", user.UserID)
                .SetResultTransformer(
                    Transformers.AliasToBean(typeof (ForumProfile)))
                .UniqueResult<ForumProfile>();
        }

    There are a number of 'Set' methods (e.g. SetInt32) that allow you to specify values for any parameters in the procedure. The AliasToBean method is then required to map the returned scalars (as specified in the XML) to the correct C# class.
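    The post relies on an NHibernateHelper class it never shows. As a rough sketch of what such a helper usually looks like against the NHibernate API (the single cached session here is an assumption, not the author's code):

        // Hypothetical helper -- not from the original post.
        using NHibernate;
        using NHibernate.Cfg;

        public static class NHibernateHelper
        {
            // Built once per AppDomain; Configure() reads hibernate.cfg.xml,
            // which points at the mapping XML above.
            private static readonly ISessionFactory SessionFactory =
                new Configuration().Configure().BuildSessionFactory();

            private static ISession session;

            public static ISession GetCurrentSession()
            {
                // Naive single-session strategy; a real ASP.NET app would
                // typically scope the session per request instead.
                if (session == null || !session.IsOpen)
                    session = SessionFactory.OpenSession();
                return session;
            }
        }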

  • How to overcome politics of the net (Google translate code refuses to work from a specific region)

    - by Jawad
    According to the FAQs I am not sure whether my question is OK to ask, whether it will be closed, whether I should post it on meta, or even whether someone will downvote it. However, it is one that has been bugging me since the trouble started. Let me explain. I have this Web Site. It uses the Google Translate API (can't post the link; it does not open from this region) with the following code:

        <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta>
        <script type="text/javascript">
          function googleTranslateElementInit() {
            new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
          }
        </script>
        <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>

    The problem is that it just stopped working. On the site you can see that I had to actually remove the above from here, here, and here, while leaving it here, here, here and here. This is because the web site "refuses" to load at all (i.e., from this region) on the pages that have the code. If I use the Firefox Stealthy plugin and open the site in Firefox, it works like a charm without any problems. But with Google Chrome, Apple Safari and Opera, the site does not load/open at all because of the Google Translate code. (I know this because if I remove the Google Translate code, the site works/loads fine.) It was one thing to program for "cross-browser compatibility" and altogether another to program for "cross-region compatibility". What can I do to make sure that the site works from anywhere? Do I completely remove the Google Translate code and do without the additional functionality, or do I look for alternatives like this or according to this?

  • What would be better in my case - Apache, Nginx or Lighttpd?

    - by The Devil
    Hey everybody, I'm writing a PHP site that's expected to get about 200-300 concurrent users browsing it. On initialization the application will load about 30 PHP classes, some 10 or maybe 15 images, and a couple of CSS files. So my question is: what else can I do (apart from optimizing my code and using APC/eAccelerator for PHP) to get as close as possible to those numbers of concurrent users? Currently we haven't chosen a server for the site to be hosted on, but most probably it'll be a VPS, dual core with 2 or maybe 4 GB RAM. Is it possible for such a server to handle that load? Also, how could I test it myself and be sure that it'll be able to handle it? Thanks in advance, Me
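    On the "how could I test it myself" part, one common sketch is a synthetic load generator such as ApacheBench (the host below is a placeholder):

        # 10,000 requests total, 300 concurrent, roughly matching the target load
        ab -n 10000 -c 300 http://test-host.example.com/index.php
        # add -k to exercise HTTP keep-alive, as browsers would
        ab -k -n 10000 -c 300 http://test-host.example.com/index.php

    Running the generator from a second machine on the same network avoids having it steal CPU from the server under test.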

  • Where to place the R code for R+Sweave+LaTeX workflow

    - by claytontstanley
    I spent the last week learning 3 new tools: R, Sweave, and LaTeX. One question came to mind, though, when working through my first project: where do I place the majority of the R code? The tutorials that I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R code from the LaTeX file and embed the result. So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and write the results out once finished (to save the work). Then, when running Sweave, I load up that saved work, so that I have the majority of my calculations already completed and available in the Sweave R session. My LaTeX callouts to R are then quite simple: just give me the xtable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards the minimal amount of R code in the LaTeX file possible, to achieve the desired result (embedding stats results in the LaTeX write-up). Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow.
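    As a minimal sketch of that split, assuming the analysis script ends with something like save(res, res.table, file = "analysis.RData") (all file and variable names here are illustrative):

        % --- report.Rnw (fragment) ---
        <<setup, echo=FALSE>>=
        load("analysis.RData")   # objects computed by the separate .R run
        library(xtable)
        @

        The fitted estimate was \Sexpr{round(res, 2)}.

        <<restable, results=tex, echo=FALSE>>=
        print(xtable(res.table, caption = "Precomputed results"))
        @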

  • Coldfusion server VERY slow page loads

    - by Kevin
    I inherited a Windows Server 2003 / ColdFusion 7 server a few weeks ago. Today a network cable was accidentally unplugged from the server. On plugging it back in, pages were not loading at all; rather, we were receiving a generic ColdFusion error page. After restarting IIS several times, and ColdFusion even more than that, we finally got pages to start loading. However, the loading is extremely slow (30+ seconds) on pages that used to load instantly, and loading through the local network (i.e. via localhost/cfide/administrator) does nothing to help the load speed. I am not familiar with IIS or ColdFusion (we're in the process of migrating this to Linux/PHP), so this is all new territory to me. I'm hoping someone may have experienced this issue in the past and can help me solve it. I'm happy to provide any additional information that might be necessary; I'm just not sure what information you might need in order to help. Thanks for your time.

  • How to install Ubuntu 13.10 on Hybrid Disk alongside Windows 8.1

    - by user205691
    I am having trouble installing Ubuntu 13.10 on an HP Envy 4-1046tx ultrabook. When I bought it, it came with Windows 7 pre-installed, but I upgraded to 8 and recently to 8.1. Somehow, I feel 8.1 is slower, or something went wrong with the upgrade and made my system slow, so I want to try dual-booting Ubuntu 13.10 with Windows 8.1. The system recovery drive holds the Windows 7 recovery files. The SSD has 4 GB allocated to Windows 8 (I think for hibernation/rapid start); 25 GB of the SSD is free, and I want to install Ubuntu there, pointing it to "/". I will also shrink the Windows partition (the only other partition available apart from recovery and the SSD) to free up 100 GB and allocate this space to "/home" during the Ubuntu installation. I tried the above steps while on Windows 8, but was not successful: the Ubuntu installation went fine, but GRUB was never loaded. I tried to deploy Linux via EasyBCD, but after that too, selecting Linux at boot would load GRUB to a command prompt and do nothing. During the Ubuntu installation I also deleted the RAID drivers with sudo dmraid -rE, but Ubuntu still didn't recognize my Windows install. I think I am missing some steps, so this time I want to do it right, with proper information, before starting the process. My requirements:
    - dual boot Ubuntu with Windows 8.1
    - C:\ shrunk, with 300 GB for Windows on sda1 and 100 GB for /home on sda1, and Ubuntu installed on the 25 GB SSD volume sda2 (this is mSATA, I think)
    - GRUB or EFI set up so that both OSes load properly without breaking anything
    - a swap partition on sda1 if needed (4 GB?)
    I have backed up my drive and have a 16 GB USB 3.0 stick with Ubuntu loaded. I hope I have mentioned everything I need and know. All I need now is some guidance on doing this installation right :)

  • Website Ethics / legal issues, image copyrights

    - by RailsN00b
    Ignoring the technical implementation of a website for a second, assume a website that is similar to Twitter but with pictures. A user says something and attaches a picture of whatever they said. Given the nature of the internet, the image will most likely not be their own. I see two options for dealing with this:
    1. The user posts a URL for the picture, and the website pulls the picture from that URL every time someone visits the page.
    2. The website saves the image in its own database of images and serves the image to visitors 'locally'.
    The problem with option #1: while it saves storage, I see an issue with 'stealing' other websites' bandwidth, and if my website has many visitors it could cost the image-hosting websites a lot, and possibly even crash them if their servers can't handle the load. The problem with option #2: while it spares the load on other websites, it effectively copies pictures that may be under copyright. Which option is better to implement in terms of legal issues and ethics? When do I need to contact another website to request permission to use images from that site? Does anyone really care about that anymore? Where can I read about this?

  • Setting up dual monitors, Xorg.conf issues

    - by JTS
    I just got a new computer (a W520 with an nVidia GF106 [Quadro 2000] graphics card) and installed Ubuntu on it using Wubi. I have everything working, so I wanted to set it up to use two monitors with an extended screen. I figured I had to edit xorg.conf, but the file didn't exist. So I tried to create it by booting in recovery mode and executing Xorg -configure, but I am getting these errors:

        (EE) Failed to load module "vmwgfx" (module does not exist, 0)
        (EE) vmware: Please ignore the above warnings about not being able to load module/driver vmwgfx
        (++) Using config file: "/root/xorg.conf.new"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        (EE) [drm] No DRICreatedPCIBusID symbol
        Number of created screens does not match number of detected devices.
        Configuration failed.
        ddxSigGiveUp: Closing log

    Any idea how I can get Xorg -configure to work, so that I can have an xorg.conf file that I can edit to enable TwinView?

    EDIT: Another way to ask the same question: why can't I boot with an xorg.conf file generated by nvidia-xconfig? Is there something in the generated xorg.conf file that might need editing?
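    For reference, a rough sketch of the Device section the proprietary nvidia driver reads for TwinView; the MetaModes resolutions here are illustrative guesses, not a tested config:

        Section "Device"
            Identifier "Default Device"
            Driver     "nvidia"
            # drive both monitors as one extended X screen
            Option     "TwinView" "True"
            Option     "MetaModes" "1920x1080,1920x1080; 1920x1080,NULL"
        EndSection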

  • Parsing T-SQL – The easy way

    - by Dave Ballantyne
    Every once in a while, I hit an issue that requires me to interrogate/parse some T-SQL code. Normally, I would shy away from this and attempt to solve the problem in some other way. I have written parsers in the past using LEX and YACC, and as much fun and awesomeness as that path is, I couldn't justify the time it would take. However, this week I was faced with just such an issue, and at the back of my mind I remembered reading through the SQL Server 2012 feature pack and seeing something called the "Microsoft SQL Server 2012 Transact-SQL Language Service". It is described there as: "The SQL Server Transact-SQL Language Service is a component based on the .NET Framework which provides parsing validation and IntelliSense services for Transact-SQL for SQL Server 2012, SQL Server 2008 R2, and SQL Server 2008." Sounds just what I was after. Documentation is very scant on this, so don't take what follows as best practice or best use; just a practice and a use. Knowing roughly what I was looking for, I found the relevant assembly in the GAC, which is simply named 'Microsoft.SqlServer.Management.SqlParser'. Even knowing that, you won't find much in terms of documentation if you do a web search, but you will find the MSDN documentation that lists the members and methods. The "Scanner" class sounded the most appropriate for my needs, as it is described as "Scans Transact-SQL searching for individual units of code or tokens." After a bit of poking around, the code I ended up with was something like:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null

        $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions
        $ParseOptions.BatchSeparator = 'GO'

        $Parser = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions)
        $Sql = "Create Procedure MyProc as Select top(10) * from dbo.Table"
        $Parser.SetSource($Sql, 0)

        $Token = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET
        $Start = 0
        $End = 0
        $State = 0
        $IsEndOfBatch = $false
        $IsMatched = $false
        $IsExecAutoParamHelp = $false

        while (($Token = $Parser.GetNext([ref]$State, [ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp)) -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF)
        {
            try {
                ($TokenPrs = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null
                $TokenPrs
                $Sql.Substring($Start, ($End - $Start) + 1)
            } catch {
                $TokenPrs = $null
            }
        }

    As you can see, the $Sql variable holds the SQL to be parsed; that is pushed into the $Parser object using SetSource, and then we call GetNext until the EOF token is returned. GetNext also returns the start and end character positions of the parsed token within the source string. This script's output is:

        TOKEN_CREATE Create
        TOKEN_PROCEDURE Procedure
        TOKEN_ID MyProc
        TOKEN_AS as
        TOKEN_SELECT Select
        TOKEN_TOP top
        TOKEN_INTEGER 10
        TOKEN_FROM from
        TOKEN_ID dbo
        TOKEN_TABLE Table

    Note that the '(', ')' and '*' characters return a token type that is not present in the Microsoft.SqlServer.Management.SqlParser.Parser.Tokens enum, which causes an error that is caught in the catch block. Fun, fun, fun: simple T-SQL parsing. Hope this helps someone in the same position; let me know how you get on.

  • Using Quickly for text-heavy app

    - by Kevin
    I am trying to create a small app that displays documentation. When it is run, the application window will display a main menu with buttons labeled 'Document 1', 'Document 2', etc. If a user clicks on one of those buttons, the text from the corresponding document will be displayed in the window. Very basic. The text documents range in length from 1000 to 5000 words, and they need basic formatting (bold, italic, maybe one or two font choices). My question is this: what is the best way to store and display long blocks of formatted text, using Quickly? There seem to be a few options: (1) I could load the text blocks into long Python strings, (2) I could load the text from text files, or (3) I could somehow copy and paste the formatted text into Glade. In the first two options, I'm not sure how I would format the text (add italic and bold, for instance) once it was loaded. I have experience with PHP/MySQL/HTML/CSS/JavaScript, but I'm new to Python. Any help would be appreciated.
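    A small sketch of option (2), applying formatting at load time, assuming the PyGTK toolkit used by the Quickly templates of that era (the file layout and tag scheme are illustrative):

        import gtk
        import pango

        def show_document(textview, path):
            """Load a plain-text document into a TextView and tag parts of it."""
            buf = textview.get_buffer()
            buf.set_text(open(path).read())
            # Formatting lives in TextBuffer tags, not in the stored file.
            bold = buf.create_tag("bold", weight=pango.WEIGHT_BOLD)
            italic = buf.create_tag("italic", style=pango.STYLE_ITALIC)
            # e.g. make the first line (a title, say) bold
            buf.apply_tag(bold, buf.get_start_iter(), buf.get_iter_at_line(1))

    A fancier variant could store lightweight markup in the text files and translate it to tags while inserting.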

  • Debian Linux server hangs after a week or so

    - by Alex Flo
    I have 2 Debian Linux 6.0.4 servers with a strange behaviour: after 5, 7, or 10 days they hang. By this I mean the servers stop answering ping and need to be restarted. I've been struggling with this problem for a couple of months now; here are some thoughts on what I have tried, without being able to solve the problem:
    - I changed the RAM on one server. These being 2 different servers, I doubt it is hardware-related, as a 3rd identical server doesn't have this problem.
    - I logged the server load; when a server hangs, the load is fine (quite low).
    - I cannot find anything in the server logs; the logs are fine right up until the server freezes.
    - I don't have access to the console, unfortunately.
    While I have years of admin experience, I have never encountered such an issue, and right now I have no idea where else to investigate. If you have an idea of what I could try in order to fix the problem, please share it with me :-)

  • Android Card Game Database for Deck Building

    - by Singularity222
    I am making a card game for Android where a player can choose from a selection of cards to build a deck that would contain around 60 cards. Currently, I have the entire database of cards created so the user can browse them. The next step is allowing the user to select cards and create a deck with whatever cards they would like. I have a form where the user can search for specific cards based on a few different attributes, and the search results are displayed in a ListActivity. My thought about deck creation is to add the primary key of each card the user selects to a SQLite database table, along with the quantity they would like in the deck (a sketch of such a table is below). This way, as the user performs searches for cards, they can see the state of the deck. Once the user decides to save the deck, I'll export the card list to XML and wipe the contents of the table. If the user wants to make changes to the deck, they load it, it is parsed back into the table, and they can make their changes. A similar process would occur when they eventually load the deck to play a game. I'm just curious what the rest of you think of this method. Currently this is a personal project and I am the only one working on it, so if I can figure out the best implementation before I even begin coding, I'm hoping to save myself some time and trouble.
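    A sketch of what that working "current deck" table could look like in SQLite (table and column names are made up for illustration):

        -- one row per distinct card currently in the deck being edited
        CREATE TABLE deck_card (
            card_id  INTEGER PRIMARY KEY REFERENCES card(_id),
            quantity INTEGER NOT NULL DEFAULT 1   -- copies of this card in the deck
        );

        -- add a selected card, or bump its count if it is already there
        INSERT OR REPLACE INTO deck_card (card_id, quantity)
        VALUES (42, COALESCE((SELECT quantity + 1 FROM deck_card WHERE card_id = 42), 1));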

  • JavaScript-loaded external content SEO

    - by user005569871
    I wonder what the best way is to have JavaScript-loaded content indexed by search engines. I know that search engines don't execute JavaScript, but I am thinking more of a progressive enhancement. I am creating a responsive website, and on the home page I will have some sections about most-visited products and recommended products that I plan to load depending on the device detected. These products will be in sliders, with thumbnail images and the names of the products. If a mobile device is detected, the slider content will not load, and a link to the external page will be shown instead. I know that external content will be indexed via the links to those resources. Where will users be directed from search in this case: to the external page or to the home page? Will it be bad for SEO if I show only product names on the front page, so they can be indexed, and hide them with CSS? What is the best way to get that content indexed and possibly direct users from search to the home page? Also, I've seen the AJAX crawling scheme, but I would like to avoid that if there is any better way.

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the Network tab, I'm curious to know what is happening during the gaps. If you look at my image below, I have highlighted in orange the areas where these gaps exist. Where I'm able to load a lot of my page from cache, it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening during this time? EDIT: Okay, I found this answer, which essentially sums up my question. So a different question: does anyone know a good method for reducing the length of these gaps? Presumably (albeit a rather extreme approach) if I inlined all my CSS on the page, there wouldn't be a delay between loading the CSS file and loading the images.

  • nfs server on cygwin slow

    - by Weltenwanderer
    The setup: we run an instance of the Cygwin nfsd on a Windows 2008 Server (Xeon 3.2 GHz). Several Sun Solaris and SunOS machines access the shares. This is the exports file:

        /disk3 (rw,all_squash)
        /disk2 (rw,all_squash)

    Those paths are soft-linked to the relevant /cygdrive/d/path/to/dir paths. Some of the folders contain up to 10k files. The problem: ls -la on the mounted folder on the Sun boxes takes 2-3 minutes, and general read performance is really bad. cat filename displays the file in slow bursts, which hurts performance on tasks that access the shared files heavily. Processor load is not the issue: the NFS server idles most of the time, and the Cygwin tasks never get over 1% load.

  • Recent update killed unity 3d launcher

    - by Steve
    I am scratching my head on this one; a lot of things are still new to me. I updated 126 packages just now through the update manager, and upon reboot everything works fine except the Unity launcher: it's just a dark space. The Dash still works, as do the top panel and Docky. When I try unity --replace I end up with this and then an indefinite hang:

        (compiz:3689): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        WARN 2012-09-23 02:18:29 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubiquity-gtkui.desktop'
        WARN 2012-09-23 02:18:30 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubuntuone-installer.desktop'
        ERROR 2012-09-23 02:18:30 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported
        Initializing unityshell options...done
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-writer.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-calc.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-impress.desktop' is using a deprecated format for its actions that will be dropped soon.
        Setting Update "main_menu_key"
        Setting Update "run_key"

    Unfortunately I cannot make heads or tails of this. Anyone, please help?

  • How to overcome politics of the net (Google translate code refuses to work from a specific region) [closed]

    - by Jawad
    Possible Duplicate: How to overcome politics of the net (Google translate code refuses to work from a specific region)

    I have this Web Site. It uses the Google Translate API (can't post the link; it does not open from this region) with the following code:

        <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta>
        <script type="text/javascript">
          function googleTranslateElementInit() {
            new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
          }
        </script>
        <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>

    The problem is that it just stopped working. On the site you can see that I had to actually remove the above from here, here, and here, while leaving it here, here, here and here. This is because the web site "refuses" to load at all (i.e., from this region) on the pages that have the code. If I use the Firefox Stealthy plugin and open the site in Firefox, it works like a charm without any problems. But with Google Chrome, Apple Safari and Opera, the site does not load/open at all because of the Google Translate code. (I know this because if I remove the Google Translate code, the site works/loads fine.) It was one thing to program for "cross-browser compatibility" and altogether another to program for "cross-region compatibility". What can I do to make sure that the site works from anywhere? Do I completely remove the Google Translate code and do without the additional functionality, or do I look for alternatives like this or according to this?

  • I want to version control my entire slice

    - by Tom
    I'm renting a slice (i.e., a VPS) from Slicehost. I've spent a day or two filling up /usr with my favorite packages, /etc with configs and init scripts, and so on. Now I want to:
    1. save this whole setup somewhere (e.g., to load onto another machine);
    2. see what changes I've made to which files;
    3. revert changes, tag revisions, and all that other good version control stuff.
    Saving a disk image gives me (1), but not (2) and (3). Using Subversion (svn import / svn://someotherhost) might give me all three, but I expect problems if I actually try to check a project out into / and maintain .svn directories in root-owned areas. And to load my setup onto a fresh slice, I'd need to install an svn client on it first. Is there a good way to do what I want to do?

  • Incorporating libs into module pattern

    - by webnesto
    I have recently started using Require.js (along with Backbone.js, jQuery, and a handful of other JavaScript libs) and I love the module pattern (here's a nice synopsis if you're unfamiliar: http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth). Something I'm running up against is best practice for incorporating libs that don't support the module pattern out of the box. For example, jQuery without modification will load into a global jQuery variable, and that's that. Require.js recognizes this and provides an example project for download with a (slightly) modified version of jQuery to incorporate with a Require.js project. This goes against everything I've ever learned about using external libs: never modify the source. I can list a ton of reasons. Regardless, this is not an approach I'm comfortable with. I have been using a mixed approach, wherein I build/load the "traditional" JS libraries in a "traditional" way (available in the global namespace) and then use the module pattern for all of my application code. This seems okay to me, but it bugs me because one of the real beauties of the module pattern (no globals) is getting perverted. Anyone else got a better solution to this problem?
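    For what it's worth, one alternative (assuming RequireJS 2.0 or later is an option) is the shim config, which lifts a global-exporting lib into a module value without touching its source; the paths below are illustrative:

        // main.js -- a minimal, untested sketch
        require.config({
            paths: {
                jquery:     'libs/jquery.min',      // jQuery 1.7+ registers itself as an AMD module
                underscore: 'libs/underscore.min',
                backbone:   'libs/backbone.min'
            },
            shim: {
                underscore: { exports: '_' },       // hand the global to dependents
                backbone: {
                    deps: ['underscore', 'jquery'],
                    exports: 'Backbone'
                }
            }
        });

        require(['jquery', 'backbone'], function ($, Backbone) {
            // both arrive as proper module values; no library source was modified
        });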

  • Severe latency only on one machine and only when accessing intranet site

    - by Joe M.
    I have one desktop machine that is experiencing consistently high latency, but only when trying to load pages from an intranet site. Using the Chrome Developer Tools, the site shows a "Waiting" time of 4-5 seconds on each page load. Other machines see <50 ms, and the problem machine loads regular internet sites with <1 s latency, so the problem is confined to one machine and one intranet site. This is a small business and all the hosts are on 192.168.0.0/24. I would have suspected a connection issue with the problem machine, but normal internet sites show no latency. Then I would have looked at connection issues with the intranet web server, but other machines have no latency to it. What else can I look at to troubleshoot this?
