Search Results

Search found 7706 results on 309 pages for 'checked'.


  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) in the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, the BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason.

    Edit: Please make sure you understand these points before answering:

    1. Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :)

    2. Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest-priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which has nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
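    You can verify such headers yourself with a HEAD request. A minimal sketch using Python's standard library (the asset URL is a hypothetical placeholder, not Google's actual logo URL):

        import urllib.request

        # issue a HEAD request so only the headers are fetched, not the image bytes
        req = urllib.request.Request("https://example.com/logo.png", method="HEAD")
        with urllib.request.urlopen(req) as resp:
            print(resp.headers.get("Cache-Control"))  # e.g. "public, max-age=691200"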

    Read the article

  • "Opportunity" to take over maintenance of a small internal website. What should I do?

    - by Dan
    I have been offered an "opportunity" to take over maintenance of a small internal website run by my group that provides information about schedules and photos of events the group has done. My manager sent me the link to the site and I checked it out. The site looked clean and neat, but loaded in ~5 seconds. I thought this was a little long considering the site really didn't contain a lot of content. This prompted me to take a look under the hood at the page's source code. To my horror, it had been totally hacked together using nested tables! I'm new, so I really can't say no to this "opportunity" – so what should I do with it? Every fiber of my being feels that the only correct thing to do is overhaul the site using CSS, divs, spans and any other appropriate tags that a sane/good web developer would have used to begin with, instead of depending on the rendering magic of tables. But I'd like to ask programmers more experienced than me, who have been in this situation: what should I do? Is my only realistic option to leave the horror as is and only adjust the content as requested? I'm really torn between good development and the corporate reality I'm part of. Is there some kind of middle ground where things can be made better even if they're not perfect? Thanks ahead of time.

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems that we sometimes get. As per Brian's post, I am installing Visual Studio Team Foundation Server Service Pack 1 first – and indeed, as this is a single-server local deployment, I need to install both. If I only install one it will leave the other product broken.

    Figure: Hopefully this will be more uneventful

    It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time.

    Figure: There are a lot of components to update

    With this update also comes an update to .NET as well as many other components.

    Figure: I downloaded the full 1.5 GB, but you could do a web install

    How long the download takes depends on how good your internet connection is, but as I am now in the US I decided not to trust the connection speeds. It took around 30-40 minutes to download the full thing, which is a little slow.

    Figure: I did not need to download, but that would increase the install time

    So on my main computer, again, this was fast, but on my netbook it took a little while.

    Figure: The actual install took around 30-40 minutes (2 hours on the netbook)

    I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010, I don't get the problem of the SP being installed before Team Explorer and having a disjointed experience.

    Figure: As I suspected, no problems with the install

    Figure: Checking in Visual Studio shows that all the servicing points were successful

    This was an easy experience, even if the SP was over 1.5 GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not seeing holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and returns to a straight position over 24 frames. Following that, I selected everything and baked all the bones, the IK, and the animation (by selecting everything in the graph editor), and even the cylinder. I saved the scene, then selected all and exported as FBX with animation and bake checked. In Unity, after importing, I am able to see the animation in the preview. When I load the model into the scene and play (after assigning the controller), I can see the animation too. But when I try to script it and control the animation, nothing happens. As a test, I tried the following under the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    Neither message is ever logged. My animation clip is called "bend", so just to try I did the following, and nothing happens:

        animation.Play("bend");

    Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.

    Read the article

  • SRV from UAV on the same texture in DirectX

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run a compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline. But it doesn't work: the texture is not visible. The texture itself is OK – when I save it to a file it is what I expect. Texture rendering is OK too – when I render another SRV, it works. So the problem is only in the UAV-to-SRV step. I also triple-checked that the pointers are OK. Please help, I'm getting mad about this. Here is some code:

        // before dispatch: create the texture with both UAV and SRV bind flags
        D3D11_TEXTURE2D_DESC textureDesc;
        ZeroMemory(&textureDesc, sizeof(textureDesc));
        textureDesc.Width = xr;
        textureDesc.Height = yr;
        textureDesc.MipLevels = 1;
        textureDesc.ArraySize = 1;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
        textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        D3DD->CreateTexture2D(&textureDesc, NULL, &pTexture);

        // unordered access view, written by the compute shader
        D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV;
        ZeroMemory(&viewDescUAV, sizeof(viewDescUAV));
        viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
        viewDescUAV.Texture2D.MipSlice = 0;
        D3DD->CreateUnorderedAccessView(pTexture, &viewDescUAV, &pTextureUAV);

        // the getSRV function, called after dispatch
        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
        ZeroMemory(&srvDesc, sizeof(srvDesc));
        srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        D3DD->CreateShaderResourceView(pTexture, &srvDesc, &pTextureSRV);

    Read the article

  • Scene graphs and spatial partitioning structures: What do you really need?

    - by tapirath
    I've been fiddling with 2D games for a while and I'm trying to move into 3D game development. I thought I should get my basics right first. From what I've read, a scene graph holds your game objects/entities and their relations to each other – 'a tire' would be a child of 'a vehicle'. It's mainly used for frustum/occlusion culling and for minimizing the collision checks between objects. Spatial partitioning structures, on the other hand, are used to divide a big game object (like the map) into smaller parts, so that you gain performance by drawing only the relevant polygons and, again, by limiting collision checks to those polygons only. A spatial partitioning data structure can also be used as a node in a scene graph. But... I've been reading about both subjects and I've seen a lot of "scene graphs are useless" and "BSP performance gain is irrelevant with modern hardware" kinds of articles. Also, some of the game engines I've checked, like gameplay3d and jMonkeyEngine, use only a scene graph (that may also be because they don't want to limit developers), whereas games like Quake and Half-Life use only spatial partitioning. I'm aware that the usage of these structures very much depends on the type of game you're developing, so for the sake of clarity let's assume the game is an FPS like Counter-Strike with some better outdoor-environment capabilities (like a terrain). The obvious question is which one is needed, and why (considering modern hardware capabilities). Thank you.
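    For readers new to the terms, here is a minimal sketch of the two ideas side by side in Python (the frustum object and its intersects method are assumed placeholders):

        class SceneNode:
            """Scene-graph node: a hierarchy of objects with bounding volumes."""
            def __init__(self, bounds, children=None):
                self.bounds = bounds            # world-space bounding volume
                self.children = children or []

            def collect_visible(self, frustum, out):
                # if the bounds fail the frustum test, the whole subtree is culled
                if not frustum.intersects(self.bounds):
                    return
                out.append(self)
                for child in self.children:
                    child.collect_visible(frustum, out)

        class UniformGrid:
            """Spatial partition: buckets objects by position, not by ownership."""
            def __init__(self, cell_size):
                self.cell_size = cell_size
                self.cells = {}                 # (ix, iy, iz) -> list of objects

            def insert(self, obj, x, y, z):
                key = (int(x // self.cell_size),
                       int(y // self.cell_size),
                       int(z // self.cell_size))
                self.cells.setdefault(key, []).append(obj)

    The scene graph expresses ownership and transform inheritance; the partition expresses "what is near this point" – which is why, as the question itself notes, the two are often combined rather than chosen between.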

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train. This thinking seems to invoke possible errors in small differences between measurements of a moving object – but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there be a problem with small round-off errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high-speed train? The concern about floating-point errors seems not to be well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states:

    During the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

    - The code contains three recursive functions (well, that one is obvious).
    - The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    - The code contains one linked list that uses dynamic memory allocation (the second obvious problem).
    - All inputs are checked to determine that they are within expected bounds before they are used.
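    The textbook's worry is easy to demonstrate: binary floating point cannot represent most decimal fractions exactly, so repeated arithmetic accumulates tiny representation errors. A quick illustration in Python (whose float is an IEEE 754 double):

        total = 0.0
        for _ in range(10):
            total += 0.1      # 0.1 has no exact binary representation

        print(total)          # 0.9999999999999999
        print(total == 1.0)   # False

    Whether an error in the 16th significant digit can matter for a braking computation is exactly what the question above is probing.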

    Read the article

  • Booting Ubuntu 13.10 from USB on server with no OS (Dell PowerEdge T110)

    - by user35581
    I have a Dell PowerEdge T110 (Xeon 1220v2) server with no OS. From my Mac, I saved the Ubuntu 13.10 server ISO (x86) and used UNetbootin to write the ISO to a USB stick (2 GB). I was hoping to boot the server from USB, and all seemed to be going well – the BIOS even detected my USB drive when I plugged it in – but for some reason I'm getting a "Missing operating system" error. I checked the USB drive and it appears that UNetbootin put the correct files on it, from a cursory glance (although I'm not entirely sure what I should be looking for). Should I be able to boot a server with no OS from a UNetbootin-created Ubuntu 13.10 USB? And if so, why might the BIOS not find the right files? I had read that there is a USB Emulation setting in the BIOS, but I haven't been able to find it in the menus. My understanding is that by default this is set to on. I might try wiping the USB stick and running UNetbootin again. The USB is formatted as MS-DOS FAT16 (should it be formatted some other way?).
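    Whatever the BIOS issue turns out to be, one quick sanity check before re-writing the stick is to verify the downloaded ISO against the checksum published alongside the release. A minimal sketch in Python (the filename is a hypothetical placeholder):

        import hashlib

        h = hashlib.md5()
        with open("ubuntu-13.10-server-i386.iso", "rb") as f:
            # hash in 1 MB chunks so large images do not need to fit in memory
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)

        print(h.hexdigest())  # compare with the published MD5SUMS entry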

    Read the article

  • Installing Ubuntu along with Windows 7 on a shrunk partition

    - by Thabo
    I am new to Ubuntu and am asking the Ubuntu community. First, this is not a duplicate question – it is rather a summary of all the solutions and questions posted in this community related to installing Ubuntu alongside Windows 7. I have bought a new HP laptop with its original Windows 7. I want to install Ubuntu alongside the 64-bit Windows 7. I ran the Ubuntu 12.04 desktop installation CD, but the Ubuntu installer doesn't show the "along with Windows 7" option – it shows only two options. I read some questions and answers posted in this community, especially the following: Ubuntu 12.04 does not see windows already install on my computer (dual installation). I tried the following things. I ran the terminal in the live CD and tried the sudo dmraid -rE command and the dmraid remove command, but the terminal says there are no dmraid partitions. So I tried another scenario and checked my partitions with GParted. There are partitions labeled C, HP Tools, Recovery and System; C contains the Windows 7 files. So I shrank the volume of the C drive. Now I have 50000 MB of unallocated disk. I tried to create a partition on that unallocated space with GParted. It says something like: you can't create more than four primary partitions – and of course all four partitions created by Windows are primary partitions. So I went back to Windows 7 and tried to create a new volume on the unallocated space. But unfortunately it says that if I create a new volume it will be a dynamic partition, and that another OS can't be booted from that partition. So I cancelled that step. Now I have 50000 MB of unallocated space, but how can I install Ubuntu on that space without harming the existing Windows 7? Because I still have only two options: Erase and install Ubuntu, or Try something else. (I can see my unallocated space by going to the "something else" option.)

    Read the article

  • Game: Age of Empires sounds good but video "out of range"

    - by Ezekiel
    I'm new to the Ubuntu realm. Currently I'm using Linux Mint 12 with Wine 1.4 and PlayOnLinux as game loading/playing programs. The video card is an MSI GeForce FX5200 (NVIDIA) and is 3D enabled. I can play the Call of Duty 5 demo just fine. My real love is the Age of Empires series of games. I loaded the Wine version of the AOE 1 demo: no sound and no picture – a black screen with an "Out of Range" window in red. I loaded my CD version of AOE 1 through PlayOnLinux: I get the sound just fine, but again the black screen with the "Out of Range" window in red. I have tried all the monitor settings, both in the game's settings and in winecfg; none of the eight monitor options worked in any combination. I have checked all the questions and blogs on this error and tried everything I found, and no one seems to come up with a real fix. I guess I need to know exactly what "Out of Range" means. Any help? Anywhere? Thanks

    Read the article

  • After upgrading to trusty, ALSA midi connection (aconnect) doesn't seem to work right

    - by SougonNaTakumi
    Previously, in Kubuntu 13.10, I was able to open vmpk or plug in a MIDI keyboard and, provided that TiMidity was running in server mode, run

        aconnect [keyboard port (129:0 for vmpk)] 14:0
        aconnect 14:0 128:0

    and I could play the keyboard and get sound. But now, a while after upgrading to trusty, I tried to do that and got no sound. TiMidity itself still plays files fine, but if I try to play them with aplaymidi, I still just get silence. Oddly, the MIDI files are clearly being read. When I ran (where 130:0 was vmpk's input port)

        aplaymidi -p 130:0 ~/path/to/midi.mid

    vmpk was highlighting notes on the piano as if it were playing the MIDI. One time I tried this, TiMidity (?) very briefly played a fraction of a second of the first chord of my song before everything went silent and vmpk just highlighted the first voice on the keyboard as usual. Now the weirdest part of this: probably about 40% of the time, when I've played at least one note with either aplaymidi or vmpk, running

        aconnect -x

    produces a sudden burst of a note or chord from my speakers (that is, if I played one note, I get a note; if I played multiple sequential notes, they turn into a chord), as if the notes were being queued up but not played, and disconnecting somehow liberated them. I have no idea what's going on there. A little while ago I remember having a problem with Audacity playing wav files sped up and also locking up if I tried to pause it, which it stopped doing when I set the audio devices to the actual audio devices rather than pulse. But now when I checked again, it's doing the opposite: it won't play audio at all and/or acts weirdly if I don't set the audio devices to pulse, and either way will very occasionally randomly do the speeding-up thing regardless. Oddly, in the midst of what looks like a pretty screwed-up sound system, sound in VLC and Firefox has been working fine, and if I play a wav file with

        aplay ~/path/to/sound.wav

    that works fine too. Any idea what I could do to figure out what's wrong with ALSA and/or fix it?

    Read the article

  • which style of member-access is preferable

    - by itwasntpete
    The purpose of OOP using classes is to encapsulate members from the outside. I always read that accessing members should be done through methods. For example:

        template<typename T>
        class foo_1 {
            T state_;
        public:
            // accessors as below
        };

    The most common approach taught by my professor was to have a get and a set method:

        // variant 1
        T const& getState() { return state_; }
        void setState(T const& v) { state_ = v; }

    or like this:

        // variant 2 – in my opinion easier to read
        T const& state() { return state_; }
        void state(T const& v) { state_ = v; }

    Assume state_ is a variable which is checked periodically, and there is no need to ensure that the value (state) stays consistent. Is there any disadvantage to exposing the state by reference? For example:

        // variant 3 – hand out a mutable reference
        T& state() { return state_; }

    Or even directly, if I declare the variable as public:

        template<typename T>
        class foo {
        public:
            // variant 4 – public data member
            T state;
        };

    In variant 4 I could even ensure consistency by using C++11 atomics. So my question is: which one should I prefer? Is there any coding standard that would reject one of these patterns? For some code see here.

    Read the article

  • Wordpress blog penalized by Google search - what's wrong?

    - by pawelbrodzinski
    I have a blog (http://blog.brodzinski.com), which is a wordpress.org blog with the pretty popular Thesis theme and almost no other customizations. Some time ago it was penalized by Google search – it simply stopped appearing in search results, even for search terms for which it used to be the top result, like my name, Pawel Brodzinski, which isn't anything close to a popular search term. To be exact, the site was penalized on Nov 18. It started popping up in search results on Dec 23, but only for a few days; since Dec 27 it has been out again. I know Google's guidelines and I'm not aware of breaking any of them. I submitted a reconsideration request after I noticed the penalty. It was processed, and there was no change whatsoever (no surprise, as it seems the site was penalized again). I checked diagnostics in Webmaster Tools: no malware was detected and no strange search terms popped up. I read related threads on the Google webmasters forum but found none of the solutions working for me. I posted a thread on the Google webmasters forum (http://www.google.com/support/forum/p/Webmasters/thread?tid=546339f49d4a03bc&hl=en) and the only answer I got was to check for duplicate content. Well, there is some duplicate content published on the web, but that is true for the vast majority of blogs and it doesn't seem to be a reason for a penalty. Also, before Dec 27 I was able to remove duplicate content from a couple of sites which were republishing my feed, but this didn't change the situation – the site was penalized again. The problem is I have no idea what can be wrong with the website or how to find out. To make the problem worse, I'm no webmaster – I just run a WordPress blog, which is supposed to be easy.

    Read the article

  • Using local repository with vmbuilder and https

    - by Onitlikesonic
    I seem to be having problems using vmbuilder with a local https mirror ("--mirror=https:///archive.ubuntu.com/ubuntu/"), as shown below:

        Process (['/usr/sbin/debootstrap', '--arch=amd64', 'precise', '/tmp/tmpYc0cOktmpfs', '<my_internal_server>/ubuntu/']) returned 1.
        stdout: I: Retrieving Release
        E: Failed getting release file <my_internal_server>/ubuntu/dists/precise/Release
        , stderr:
        2012-10-18 10:36:36,429 INFO : Unmounting tmpfs from /tmp/tmpYc0cOktmpfs
        Traceback (most recent call last):
          File "/usr/bin/vmbuilder", line 24, in <module>
            cli.main()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/contrib/cli.py", line 216, in main
            distro.build_chroot()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 83, in build_chroot
            self.call_hooks('bootstrap')
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 67, in call_hooks
            call_hooks(self, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 165, in call_hooks
            getattr(context, func, log_no_such_method)(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/distro.py", line 136, in bootstrap
            self.suite.debootstrap()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/dapper.py", line 269, in debootstrap
            run_cmd(*cmd, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 120, in run_cmd
            raise VMBuilderException, "Process (%s) returned %d. stdout: %s, stderr: %s" % (args.__repr__(), status, mystdout.buf, mystderr.buf)
        VMBuilder.exception.VMBuilderException: Process (['/usr/sbin/debootstrap', '--arch=amd64', 'precise', '/tmp/tmpYc0cOktmpfs', '<my_internal_server>/ubuntu/']) returned 1.
        stdout: I: Retrieving Release
        E: Failed getting release file <my_internal_server>/ubuntu/dists/precise/Release
        , stderr:

    I've checked that the files are in the correct place, and I'm able to set this up using http instead of https. However, this server will provide only https access to the repos – the http access is only temporarily open. This might be because the certificate is not valid on the https side (since it's self-signed), or because vmbuilder doesn't support https. In either case, how can I get this to work? (If it's a matter of the invalid certificate, I don't mind ignoring any checks.)

    Read the article

  • HTML5 media loading sometimes suspends or aborts: misconfigured Apache?

    - by Joan Botella
    Recently, some code that had been working fine for months started behaving unexpectedly. The code is just a media-file-loading JavaScript function that uses jQuery. It's pretty long, but in essence it is like this:

        var $audio = $('<audio>');
        $audio.on('canplaythrough', function (e) {
            // start playback once the browser thinks it can play to the end
            $audio[0].play();
        });
        $audio.attr('src', 'song.ogg');

    Basically, the file only loads sometimes, and sometimes stops loading with a suspend or even an abort event. I have uploaded a little testing HTML page to http://www.joanbotella.com/tests/loading , where you can see what's happening. You can download the test files from http://www.joanbotella.com/tests/loading/loadingTest.zip for local testing. I have just checked that opening the test index.html file directly in Firefox, rather than through my localhost Apache server, makes the audio files perfectly playable. So, I assume, my hosting provider and I have the Apache server misconfigured for serving media files. My software versions are: Apache 2.2.22-1ubuntu1.7, Mozilla Firefox 31.0, Chromium 36.0.1985.125 and jQuery 1.11.0. Can you help me? Thanks in advance!
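    A quick way to isolate whether Apache is really the variable is to serve the same test directory with a different server and compare. A minimal sketch using Python's standard library (serves the current directory on port 8000, for testing only):

        from http.server import HTTPServer, SimpleHTTPRequestHandler

        # http://localhost:8000/index.html should then serve the same test files
        HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()

    If the audio loads reliably this way, the problem is narrowed down to the Apache configuration rather than the JavaScript.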

    Read the article

  • Problems updating and re-installing VLC Ubuntu 11.10

    - by irma claeys
    I can't seem to fix, update or upgrade VLC at all. This is what I get when trying to install. I had VLC before, but it was not working properly, so I uninstalled it; then I tried to install it again to get the full properties. Now I cannot install, as I get the message below. I have tried everything and have checked your site for additional info. Any suggestions for someone who is new to Ubuntu and Linux altogether?

    Package dependencies cannot be resolved. This error could be caused by required additional software packages which are missing or not installable. Furthermore, there could be a conflict between software packages which are not allowed to be installed at the same time. The following packages have unmet dependencies:

        vlc: Depends: vlc-nox (= 1.1.12-2~oneiric1) but 2.1.0~~git20121022+r2158-0~r88~oneiric1 is to be installed
             Depends: libaa1 (>= 1.4p5) but 1.4p5-38build1 is to be installed
             Depends: libavcodec-extra-53 (>= 4:0.7-1) but 4:0.7.6ubuntu0.11.10.1+medibuntu1 is to be installed
             Depends: libavutil-extra-51 (>= 4:0.7-1) but 4:0.7.6ubuntu0.11.10.1+medibuntu1 is to be installed
             Depends: libc6 (>= 2.8) but 2.13-20ubuntu5.3 is to be installed
             Depends: libfreetype6 (>= 2.2.1) but 2.4.4-2ubuntu1.2 is to be installed
             Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.1-9ubuntu3 is to be installed
             Depends: libqtcore4 (>= 4:4.7.0~beta1) but 4:4.7.4-0ubuntu8.2 is to be installed
             Depends: libqtgui4 (>= 4:4.5.3) but 4:4.7.4-0ubuntu8.2 is to be installed
             Depends: libsdl-image1.2 (>= 1.2.10) but 1.2.10-2.1 is to be installed
             Depends: libsdl1.2debian (>= 1.2.10-1) but 1.2.14-6.1ubuntu4 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.6.1-9ubuntu3 is to be installed
             Depends: libxcb-xv0 (>= 1.2) but 1.7-3 is to be installed
             Depends: zlib1g (>= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu3 is to be installed

    Read the article

  • Ubuntu 12.04 desktop Error: unknown command 'gfxmode'. Pressing any key continues

    - by Andy
    Premise: Linux noobie here. I have the same issue as the OP: fresh 12.04 desktop, changed GRUB with Grub Customizer, and now I get:

        unknown command 'gfxmode'. Pressing any key continues

    I was asked to re-post this question and link to the thread I refer to above. I have tried what Tarek said, and nothing seems to work. I find two lines containing gfxmode:

        function gfxmode {
        gfxmode \$linux_gfx_mode

    Note: not sure if it matters, but in the error message the two single quotes around gfxmode are not the same – the first is a slanted quote mark, the second (after gfxmode) is a straight one. I commented out the whole line, and I tried to add 'set' before gfxmode; neither made any difference. I found another place that said to remove the line from another file, 40_custom, but I checked and those files do not contain anything relating to the line we are looking for:

        gfxmode $linux_gfx_mode

    Not sure what I am missing, but the file linux.save has recently appeared when searching for the line. Not sure if it's just a temp file of some kind. In any case, I cannot seem to get it – what am I missing? Thanks! P.S. sorry for any mess-ups in form :)

    Read the article

  • Install on Acer Aspire 4752

    - by user216962
    I am at my wits' end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB. I got the error:

        [Errno 5] Input/output error
        This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment.

    So I tried a different USB stick: same error. I tried different versions of Ubuntu: same error. I've used Startup Disk Creator and UNetbootin to make bootable USB devices. I can boot with the USB drive and run Ubuntu that way. I even checked the hard drive using the tools in Ubuntu; everything was fine, except it said the hard drive was hot. I tried a different hard drive and got the same error as above. I ran a test with memtest86; everything was fine. No matter what I do, using the USB gives me the Errno 5 error. I then switched to using DVDs. Now I keep getting an uncompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx

    I've posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. Using just the Azure Management Portal, you don't get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary – and fittingly enough, it's Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn't a problem with Stackify. This instance was still there, and still sending data. You'll notice, though, that the "Status" check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running. "Surely, this can't be," I'm thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete – not continue to eat your data. As soon as I learn more, I'll be posting an update.

    Read the article

  • Bizarre SSH Problem - It won't even start

    - by thallium85
    I recently got Ubuntu 12.04 Precise, got it up and running with some MediaWiki software and a static IP on the box and router, and was able to access the main page even from a cell phone. Everything seemed great... Then I wanted to finally get rid of the monitor and keyboard and log in remotely via SSH. I installed openssh-server, left everything pointing to port 22 for a test run, and installed PuTTY on my Windows XP machine. I got a connection refused. So I went back and started checking the Ubuntu install itself (I'm under root from this point on):

        $ sudo -s
        $ service ssh status
        ssh stop/waiting
        $ service ssh start
        ssh start/running, process 2212
        $ service ssh status
        ssh stop/waiting

    Apparently ssh has stopped, or is waiting for something...

        $ ssh localhost
        ssh: connect to host localhost port 22: Connection refused

    I can't even connect to myself. I checked ufw (the firewall) to see if port 22 is doing all right:

        $ sudo ufw status
        Status: active
        To                         Action      From
        22                         ALLOW       Anywhere
        22/tcp                     ALLOW       Anywhere
        22                         ALLOW       Anywhere (v6)
        22/tcp                     ALLOW       Anywhere (v6)

    sshd_config shows only Port 22. Is ssh not using the right IP address at all? I just don't get what I did wrong here. When this is up and running I will definitely change the port number, but for now I don't want to mess with the default install too much until a test run with PuTTY is successful.

    Edit: Here are my sshd_config file and my ssh_config file. The command /usr/sbin/sshd -p 22 -D -d -e returns:

        /etc/ssh/sshd_config line 159: Subsystem 'sftp' already defined.

    Edit: @phoibus: moving the sshd_config file and reinstalling did the trick!

        $ service ssh status

    The above command shows that ssh is now running, and I am now able to log in remotely from my Windows XP computer via PuTTY. Thanks so much! I can now use my monitor for other things!

    Read the article

  • Every file on cPanel got deleted (then restored hours later), and I have no idea why

    - by mcranston18
    I apologize in advance if I don't provide proper detail; I am new to server stuff and am looking for general advice about this issue. I was helping out a client doing web design last month. They have about a dozen static sites on one server. The sites are all built on Joomla, except one, which I built on WordPress. Everything was working fine last month when we did the redesign, but all of a sudden this morning every single file on their server got deleted: every web page, every file, and all e-mail addresses. I phoned the hosting company (alliancewww.com) to ask, "Why did every single file suddenly get deleted off the server?" They said, "Because someone must have deleted it." I said, "Well, no one did." (Which I'm pretty damn sure no one did.) They said, "You can pay us to look into the problem." I authorized $150 for them to look into it. About an hour later, everything was magically reinstated. The host said they had a backup of everything and had just restored it. What I'm wondering: Does anyone have recommendations of logs I can go through to investigate how the files got deleted in the first place? I've checked their cPanel logs but found nothing. Is it likely that this is a mess-up on the host's part?

    Read the article

  • Help with converting an XML into a 2D level (Actionscript 3.0)

    - by inzombiak
    I'm making a little platformer and wanted to use Ogmo to create my level. I've gotten everything to work except that the level my code generates is not the same as what I see in Ogmo. I've checked the array and it matches the level in Ogmo, but when I loop through it in my code I get the wrong thing. I've included my code for creating the level as well as an image of what I get and what I'm supposed to get. EDIT: I tried to add the image, but I couldn't get it to display properly. Also, if any of you know of better level editors, please let me know.

        xmlLoader.addEventListener(Event.COMPLETE, LoadXML);
        xmlLoader.load(new URLRequest("Level1.oel"));

        function LoadXML(e:Event):void
        {
            levelXML = new XML(e.target.data);
            xmlFilter = levelXML.*;
            for each (var levelTest:XML in levelXML.*)
            {
                crack = levelTest;
            }
            // split the tile data into one array entry per character;
            // note that any line-break characters in the XML text also
            // become entries and shift every row that follows them
            levelArray = crack.split('');
            trace(levelArray);
            count = 0;
            for (i = 0; i <= 23; i++)
            {
                for (j = 0; j <= 35; j++)
                {
                    if (levelArray[i * 36 + j] == 1)
                    {
                        block = new Platform();
                        s.addChild(block);
                        block.x = j * 20;
                        block.y = i * 20;
                        count++;
                        trace(i);
                        trace(block.x);
                        trace(j);
                        trace(block.y);
                    }
                }
            }
            trace(count);
        }
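    The flat-index arithmetic itself is easy to sanity-check in isolation. A minimal sketch in Python of the same string-to-grid mapping (a hypothetical 4x3 level, where '1' marks a platform):

        data = "100001000010"       # 3 rows x 4 columns, flattened row by row
        WIDTH, HEIGHT = 4, 3

        for i in range(HEIGHT):
            # the same indexing the ActionScript loop uses: i * WIDTH + j
            print([data[i * WIDTH + j] for j in range(WIDTH)])

    If the exported string contains line-break characters between rows, this arithmetic lands on the wrong characters – worth ruling out before suspecting the loop itself.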

    Read the article

  • My new favourite traceflag

    - by Dave Ballantyne
    As we are all aware, there are a number of traceflags: some documented, some semi-documented and some completely undocumented. Here is an undocumented one that Paul White (b|t) mentioned almost as an aside in one of his excellent blog posts. Much has been written about residual predicates and how a predicate can be pushed into a seek/scan operation. This is a good thing to happen – it saves a lot of processing. For the uninitiated, though: if we have a simple SELECT statement such as the one pictured, the process that SQL Server goes through to resolve it is:

    1. The index IX_Person_LastName_FirstName_MiddleName is navigated to find the first "Smith".
    2. For each "Smith", the middle name is checked for being a null.

    Two operations! And the execution plan doesn't fully represent all the work that is being undertaken. As you can see, there is only a single seek operation; the work undertaken to resolve the condition "MiddleName is not null" has been pushed into it. This can be seen in the properties. "Seek Predicate" is how the index has been navigated, and "Predicate" is the condition run over every row – a scan inside a seek! So the question is: how many rows were resolved by the seek, and how many by the scan? How many rows did the filter remove? Wouldn't it be nice if this operation could be split? That is exactly what traceflag 9130 does. Executing the query with it enabled changes the plan rather dramatically, and should change how we think about the index seek itself. The Filter operator has been added and, unsurprisingly, the condition in it is "MiddleName is not null". So it is now evident that the seek operation found 103 Smiths, and 60 of those Smiths had a non-null MiddleName. This traceflag has no place on a production system – don't even think about it.

    Read the article

  • Cannot Enter Repro Admin Web Interface at Port 5080

    - by aqua
    I have followed the instructions at www.rtcquickstart.org to set up my firewall, DNS settings and TLS, and have installed the TURN server and repro proxy as instructed, and have restarted repro. However, I am not able to access the web interface of repro on port 5080, either at localhost:5080 / 127.0.0.1:5080 or at the server's IP address, IPADDRESS:5080 (I have set the server's IP for binding in repro.config). I get the browser error message 'Unable to connect to server' whenever trying to connect to the web interface on port 5080. I initially had Apache2 installed, which loaded pages correctly at port 80 / the address root, and when checked it 'listened' on port 5080 after that port was configured in /etc/apache2/ports.conf – yet the repro web interface still did not work on port 5080. I have tried uninstalling Apache2 in case it was conflicting with repro's web server, but the problem persists, and testing port 5080 now shows that nothing is 'listening' on it. I have tried reinstalling / purging repro but it has not helped. My router is correctly set to allow all ports; port 5080 is open and forwarding correctly. I can connect to the internet and ping all websites through the server, and everything else is working correctly. I would be grateful if anyone could offer advice on how to solve this problem.
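    For anyone reproducing the "nothing listening" check, a minimal sketch in Python that tests a TCP connection to the admin port from the server itself:

        import socket

        # connect_ex returns 0 when something accepted the connection;
        # an immediate non-zero result usually means no listener on that port
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(3)
        result = sock.connect_ex(("127.0.0.1", 5080))
        print("listening" if result == 0 else "nothing listening (errno %d)" % result)
        sock.close()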

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh [email protected] and have a prompt ask me for a password to unlock the keyring for the whole GNOME session, so subsequent ssh calls wouldn't need the keyring password any longer (not quite sure if this was in Ubuntu or another distro). But nowadays doing ssh [email protected] asks me, in the terminal, for my keyring password every single time – which defeats the purpose of using SSH keys. I checked:

        $ cat /etc/pam.d/lightdm | grep keyring
        auth optional pam_gnome_keyring.so
        session optional pam_gnome_keyring.so auto_start

    which looks fine, and:

        $ pgrep keyring
        1784 gnome-keyring-d

    so the keyring daemon is alive. I finally found that the SSH_AUTH_SOCK variable (along with GNOME_KEYRING_CONTROL, GPG_AGENT_INFO and GNOME_KEYRING_PID) is not being set properly. What is the proper way to set these variables, and why aren't they being set in my environment (i.e. shouldn't they be set in a default install)? I guess I could set them in .bashrc, but then the variables would only be defined in bash sessions; while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use the keyring.

    Read the article
