Search Results

Search found 8013 results on 321 pages for 'clean urls'.


  • Rails link to current page and passing parameters to it

    - by Faisal
    I am adding I18n to my Rails application by passing the locale using URL params. My URLs look like http://example.com/en/users and http://example.com/ar/users (for the English and Arabic locales respectively). In my routes file, I have defined my routes with a :path_prefix option: map.resources :users, :path_prefix => '/:locale' The locale is set using a before_filter defined in ApplicationController: def set_locale I18n.locale = params[:locale] end I also defined ApplicationController#default_url_options to add the locale to all URLs generated by the application: def default_url_options(options={}) {:locale => I18n.locale} end What I want is to add a link in the layout header (displayed on all pages) that links to the same page but with the other locale. For instance, if I am browsing the Arabic locale, I want an "English" link in the header that redirects me back to my current page and sets the locale to English. Is there a way to do this in rails?
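
    A minimal sketch of such a link, assuming Rails 2.x and the map.resources/:path_prefix routes shown above; passing the full params hash back to url_for regenerates the current page's route with only :locale overridden. The helper name is made up:

        # app/helpers/application_helper.rb -- a sketch; helper name is hypothetical
        def locale_switch_link
          other = (I18n.locale.to_s == 'ar' ? 'en' : 'ar')
          label = (other == 'en' ? 'English' : 'العربية')
          # params already contains :controller, :action and the rest of the current
          # request, so merging :locale regenerates the same page under the other prefix
          link_to label, url_for(params.merge(:locale => other))
        end

    The layout header would then just call <%= locale_switch_link %>.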

    Read the article

  • SQL Server 2005 jobs running twice in a row - using LiteSpeed

    - by Malnizzle
    Howdy! I have a SQL Server (2005) instance backing up to a network share, which has a group of maintenance plans set up through LiteSpeed to back up different DBs. They were set up to run two sub-plans on different schedules for full/diff backups and did that just fine for a couple of months. Then I added a "Clean Up" task to the sub-plans. Ever since that point, the backup creates another bak right after the first bak job is completed. I removed the clean-up item from the sub-plan, and it still creates two baks when run. Both the SQL Activity Monitor and the machine's Windows application log show just one job being executed. I did the same thing to a couple of other servers backing up to the same location, and they are behaving correctly. Thoughts?
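
    One way to narrow this down is to check whether the SQL Agent job itself fires twice or whether a single run produces two backups. A diagnostic sketch against msdb (the LIKE pattern is a guess at the job name):

        -- List recent runs of the backup jobs, newest first (job-level outcome rows only)
        SELECT j.name, h.run_date, h.run_time, h.run_duration, h.run_status
        FROM msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
        WHERE h.step_id = 0                      -- 0 = job outcome, not individual steps
          AND j.name LIKE '%LiteSpeed%'          -- hypothetical job name pattern
        ORDER BY h.run_date DESC, h.run_time DESC;

    One row per schedule window points at the LiteSpeed sub-plan producing two files; two rows point at a duplicated job or schedule.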

    Read the article

  • Deploying Django at Dreamhost

    - by Imran
    I'm trying to get the Poll tutorial working on my Dreamhost account (I don't have any prior experience deploying Django). I downloaded the script I found here (http://gabrielfalcao.com/2008/12/02/hosting-and-deploying-django-apps-on-dreamhost/) into my home directory and executed it. Now I have Python 2.5 and Django in ~/.myroot/ and my Django projects directory is ~/projects/ Here's the content of the ~/projects/ directory (I copied the polls/ and templates/polls/ directories myself). projects/ |-- admin_media -> /home/imran2140/.myroot/usr/lib/python2.5/site-packages/django/contrib/admin/media |-- dispatch.fcgi |-- polls | |-- __init__.py | |-- __init__.pyc | |-- admin.py | |-- admin.pyc | |-- models.py | |-- models.pyc | |-- polls.db | |-- urls.py | |-- urls.pyc | |-- views.py | `-- views.pyc |-- script_templates | |-- dispatch.template | `-- htaccess.template `-- templates `-- polls |-- detail.html |-- index.html `-- results.html 5 directories, 17 files Now what should I do to get the Polls app working? Update: I finally got a "Hello World" Django app working with Passenger WSGI. It worked fine with both the server's default Python 2.3.5 and my installed Python 2.5.2. See Passenger WSGI - Django at Dreamhost Wiki.
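
    With Passenger WSGI on Dreamhost, the dispatcher is a passenger_wsgi.py file in the domain's web directory rather than dispatch.fcgi. A minimal sketch based on the layout above; the settings module name and the ~/.myroot Python path are assumptions taken from the directory listing:

        # passenger_wsgi.py -- a sketch; paths and module names are assumptions
        import os
        import sys

        # Re-exec under the Python 2.5 installed in ~/.myroot instead of the system 2.3.5
        INTERP = os.path.expanduser("~/.myroot/usr/bin/python2.5")
        if sys.executable != INTERP:
            os.execl(INTERP, INTERP, *sys.argv)

        sys.path.insert(0, os.path.expanduser("~"))            # makes "projects" importable
        sys.path.insert(0, os.path.expanduser("~/projects"))   # makes "polls" importable
        os.environ["DJANGO_SETTINGS_MODULE"] = "projects.settings"

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()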

    Read the article

  • django ignoring admin.py

    - by noam
    I am trying to enable the admin for my app. I managed to get the admin running, but I can't seem to make my models appear on the admin page. I tried following the tutorial (here) which says: (Quote) Just one thing to do: We need to tell the admin that Poll objects have an admin interface. To do this, create a file called admin.py in your polls directory, and edit it to look like this: from polls.models import Poll from django.contrib import admin admin.site.register(Poll) (end quote) I added an admin.py file as instructed, and also added the following lines into urls.py: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', ... (r'^admin/', include(admin.site.urls)), ) but it appears to have no effect. I even added a print 1 at the first line of admin.py and I see that the printout never happens, So I guess django doesn't know about my admin.py. As said, I can enter the admin site, I just don't see anything other than "groups", "users" and "sites". What step am I missing?
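
    admin.autodiscover() only imports the admin module of apps listed in INSTALLED_APPS, so a common cause is the polls app (or the dotted path it is imported under) missing from settings. A sketch of the pieces that have to line up, using the tutorial's app label:

        # settings.py -- the app must be listed under the same path used in imports
        INSTALLED_APPS = (
            'django.contrib.admin',
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'django.contrib.sites',
            'polls',   # if the app lives in a package, e.g. apps.polls, list that path instead
        )

        # polls/admin.py -- imported by admin.autodiscover() at startup
        from django.contrib import admin
        from polls.models import Poll

        admin.site.register(Poll)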

    Read the article

  • Tomcat cookies not working via my ProxyPass VirtualHost

    - by John
    Hi there. I'm having some issues getting cookies to work when using a ProxyPass to redirect traffic on port 80 to a web application hosted via Tomcat. My motivation for enabling cookies is to get rid of the "jsessionid=" parameter that is appended to the URLs. I've enabled cookies in my context.xml in META-INF/ for my web application. When I access the web application via http://url:8080/webapp it works as expected: the jsessionid parameter is not visible in the URL; instead it's stored in a cookie. When accessing my website via an apache2 virtual host, the cookies don't seem to work, because now "jsessionid" is being appended to the URLs. How can I solve this issue? Here's my VHost configuration: <VirtualHost *:80> ServerName somedomain.no ServerAlias www.somedomain.no <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPreserveHost Off ProxyPass / http://localhost:8080/webapp/ ProxyPassReverse / http://localhost:8080/webapp/ ErrorLog /var/log/apache2/somedomain.no.error.log CustomLog /var/log/apache2/somedomain.no.access.log combined </VirtualHost>
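
    A likely cause: the proxy maps / on the outside to /webapp/ on Tomcat, so the JSESSIONID cookie Tomcat sets with path /webapp never matches the browser's / URLs, the cookie is never sent back, and Tomcat falls back to URL rewriting. A sketch of the extra directive (mod_proxy, Apache 2.2+) that rewrites the cookie path along with the URLs:

        ProxyPass        /  http://localhost:8080/webapp/
        ProxyPassReverse /  http://localhost:8080/webapp/
        # Rewrite the path of cookies Tomcat sets for /webapp so they apply to /
        ProxyPassReverseCookiePath /webapp /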

    Read the article

  • Uninstall last n of ports/packages

    - by Radio
    While compiling some port, I realized that it depends on 1000+ other ports and will install forever until I die or my disk is full (my HDD is really small). I interrupted make install clean. How do I uninstall and clean up those dependencies which have already been built and installed? (There are at least 100+ of them.) pkg_cutleaves won't work in this case, since the main port wasn't registered yet. Please help. FreeBSD 9.0-RELEASE amd64 EDIT: Another way to ask this question: how can I see all dependencies for a non-registered port, and all sub-dependencies of those dependencies, independently of previously installed ports or their [sub]dependencies?
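
    Even for a port that was never registered, the ports framework itself can list the full recursive dependency tree. A sketch (the port path is a placeholder) that compares the tree against what is currently registered, so only likely-new packages are reviewed for removal:

        # In the port's directory: full recursive list of build+run dependencies
        cd /usr/ports/category/portname            # placeholder path
        make all-depends-list > /tmp/wanted-paths.txt

        # Origins of everything currently registered, one per line
        pkg_info -qoa | sort > /tmp/installed-origins.txt

        # Installed packages that are dependencies of this port -> removal candidates
        sed 's|^/usr/ports/||' /tmp/wanted-paths.txt | sort > /tmp/wanted-origins.txt
        comm -12 /tmp/wanted-origins.txt /tmp/installed-origins.txt > /tmp/candidates.txt
        # Review candidates.txt by hand (other ports may need some of them),
        # then remove the unwanted ones with pkg_delete <pkgname>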

    Read the article

  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced recently, however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960's and several PC's and phones on the other side of a 77-meter run. The PC's were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity: change cables between the wall jack and device. change patch cables between the patch panel and switch port(s). try different switch ports within the 2960 stack. change end-user devices with known-good equipment (new phones, different PC's). clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int) Pored over the device logs and Observium RRD graphs. No link up/down issues from the switch side. change power strips on the end-user side. test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)* test cable runs with a Tripp-Lite cable tester. (clean) run diagnostics on the switch stack members. (clean) In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool... Interface Speed Local pair Pair length Remote pair Pair status --------- ----- ---------- ------------------ ----------- -------------------- Gi4/0/14 1000M Pair A 79 +/- 0 meters Pair B Normal Pair B 75 +/- 0 meters Pair A Normal Pair C 77 +/- 0 meters Pair D Normal Pair D 79 +/- 0 meters Pair C Normal

    Read the article

  • Double-byte characters in querystring using PHP

    - by Jeffrey Berthiaume
    I'm trying to figure out how to create personalized URLs for double-byte languages. For example, this URL from Amazon Japan has Japanese characters within it (specifically, in the path): http://www.amazon.co.jp/????????-DVD-???/dp/B00005R5J3/ref=sr_1_3?ie=UTF8&s=dvd&qid=1269891925&sr=8-3 What I would like to do is have: http://www.mysite.com/???????? or even http://www.mysite.com/index.php?name=???????? and be able to properly decode the $_GET['name'] string. I think I have tried all of the urldecode and utf8_decode possibilities, but I just get gibberish in response. This all works fine with a form $_POST, but I need these URLs to be emailable...
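
    A sketch of both ends, assuming the pages are stored and served as UTF-8. The usual source of gibberish is decoding twice (PHP already percent-decodes $_GET) or calling utf8_decode on text that is not Latin-1. The sample string is a placeholder, since the original characters were lost above:

        <?php
        // Generating the personalized link (placeholder name, UTF-8 encoded source file)
        $name = "日本語の名前";
        $url  = "http://www.mysite.com/index.php?name=" . rawurlencode($name);

        // index.php -- receiving side
        header('Content-Type: text/html; charset=UTF-8');
        $name = isset($_GET['name']) ? $_GET['name'] : '';
        // $_GET is already percent-decoded; do NOT urldecode()/utf8_decode() it again
        echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8');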

    Read the article

  • Exclude filter from certain URLs

    - by Mads Mobæk
    I'm using a filter in web.xml to check if a user is logged in or not: <filter> <filter-name>LoginFilter</filter-name> <filter-class>com.mycompany.LoginFilter</filter-class> </filter> <filter-mapping> <filter-name>LoginFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> And this works like a charm until I have a stylesheet or image I want to exclude from this filter. I know one approach is to put everything that's protected inside /private or similar, and then set the url-pattern to <url-pattern>/private/*</url-pattern>. The downside to this is that my URLs then look like http://www.mycompany.com/private/mypage instead of http://www.mycompany.com/mypage. Is there another solution to this problem that lets me keep my pretty URLs?
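
    One common approach is to keep the /* mapping but have the filter itself pass static resources straight through. A sketch of what LoginFilter might look like; the excluded prefixes and extensions are assumptions:

        // A sketch of LoginFilter; the excluded prefixes/extensions are examples
        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;

        public class LoginFilter implements Filter {
            public void init(FilterConfig cfg) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                String path = request.getRequestURI()
                                     .substring(request.getContextPath().length());

                // Let static resources through without the login check
                if (path.startsWith("/css/") || path.startsWith("/images/")
                        || path.endsWith(".js") || path.endsWith(".png")) {
                    chain.doFilter(req, res);
                    return;
                }

                // ... the existing "is the user logged in?" logic goes here ...
                chain.doFilter(req, res);
            }
        }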

    Read the article

  • Overwrite archetypes in Maven

    - by Random
    Hello again! I'm having some trouble using Maven for my archetypes and I will need to overwrite some. I launch an instruction that does an archetype:generate in an archetype already existing directory. Is there a parameter that let's me overwrite existing archetypes? I have search the maven definitve guide but it states that the only parameters accepted are: -DgroupId -DartifactId -Dversion -DpackageName -DarchetypeGroupId -DarchetypeArtifactId -DarchetypeVersion -DinteractiveMode I could just search the directory and delete the files, but this proccess is going to be done automatically (so no human involved, no brains involved) and I wouldn't like he machine deleting things around. Thanks for all! Edit: I almost forgot, here is some maven trace: [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'archetype'. [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Default Project [INFO] task-segment: [archetype:generate] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] Preparing archetype:generate [INFO] No goals needed for project - skipping [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'. [INFO] Setting property: velocimacro.messages.on => 'false'. [INFO] Setting property: resource.loader => 'classpath'. [INFO] Setting property: resource.manager.logwhenfound => 'false'. [INFO] [archetype:generate {execution: default-cli}] [INFO] Generating project in Batch mode [INFO] Archetype defined by properties [INFO] ---------------------------------------------------------------------------- [INFO] Using following parameters for creating OldArchetype: archetype-foo-lib:1.0 [INFO] ---------------------------------------------------------------------------- [INFO] Parameter: groupId, Value: foo.tecnologia [INFO] Parameter: packageName, Value: foo.tecnologia [INFO] Parameter: basedir, Value: C:\temp\Desarrollo [INFO] Parameter: package, Value: foo.tecnologia [INFO] Parameter: version, Value: 1.0 [INFO] Parameter: artifactId, Value: Foo-Lib-Test [ERROR] Directory Foo-Lib-Test already exists - please run from a clean directory org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory at org.apache.maven.archetype.old.DefaultOldArchetype.createArchetype(DefaultOldArchetype.java:242) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.processOldArchetype(DefaultArchetypeGenerator.java:253) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:143) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:286) at org.apache.maven.archetype.DefaultArchetype.generateProjectFromArchetype(DefaultArchetype.java:69) at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:184) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at 
org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at com.foo.model.CSMavenCli.main(CSMavenCli.java:391) at com.foo.model.MavenAdmin.generateArchetype(MavenAdmin.java:399) at com.foo.model.ValidarPom.validarPom(ValidarPom.java:167) at com.foo.prueba.GenerarPOM.execute(GenerarPOM.java:93) at org.apache.struts.chain.commands.servlet.ExecuteAction.execute(ExecuteAction.java:58) at org.apache.struts.chain.commands.AbstractExecuteAction.execute(AbstractExecuteAction.java:67) at org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913) at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Unknown Source) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] : org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory Directory Foo-Lib-Test already exists - please run from a clean directory [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Fri Apr 09 10:01:33 CEST 2010 [INFO] Final Memory: 15M/28M [INFO] 
------------------------------------------------------------------------

    Read the article

  • Upgrade to Genuine Windows 8 Pro from non genuine Windows 7

    - by mark
    I have a computer with non-genuine Windows 7 (cracked with Windows Loader). I was thinking of buying/upgrading to Windows 8 Pro. I ran Windows8-UpgradeAssistant.exe and it said that I can upgrade to Windows 8 Pro. Can I perform a clean upgrade (format and install) from my current Windows 7 to Windows 8? In the future, in order to re-install Windows 8, do I need to re-install the non-genuine Windows 7 and install on top of it? If my hard disk crashes, or I want to install on a new hard disk (clean install), do I need to install Windows 7 again before upgrading to Windows 8? If I don't like Windows 8, can I downgrade to genuine Windows 7?

    Read the article

  • SVN multiple repositories in subfolders

    - by fampinheiro
    I'm using Apache + SVN. Apache config file: LoadModule dav_module modules/mod_dav.so LoadModule dav_svn_module modules/mod_dav_svn.so LoadModule authz_svn_module modules/mod_authz_svn.so <Location /code> DAV svn SVNParentPath "c:/repositories" </Location> Imagine I have this file structure (in every t? directory I have one SVN repository): c repositories uc1 0809v t1 t2 t3 0809i t1 t2 uc2 t1 t2 t1 I can access the repositories using: svn://domain.com/code/uc1/0809v/t1 svn://domain.com/code/uc1/0809v/t2 svn://domain.com/code/uc1/0809v/t3 I want to access them using the URLs: http://domain.com/code/uc1/0809v/t1 http://domain.com/code/uc1/0809v/t2 http://domain.com/code/uc1/0809v/t3 and see the content of the repository in the browser. If I create the repository at the root of the SVN folder, I can see the repository (http://domain.com/code/t1); when I try the other URLs I get the error "Could not open the requested SVN filesystem". My question is: is it possible to do a search in all subfolders looking for SVN repositories?
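
    As far as I know, SVNParentPath only treats the immediate children of the given directory as repositories, so a nested layout needs one Location block per directory that directly contains repositories (SVNListParentPath On additionally gives a browsable index of each parent). A sketch for one branch of the tree; the remaining blocks would follow the same pattern:

        <Location /code/uc1/0809v>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809v"
            SVNListParentPath On
        </Location>

        <Location /code/uc1/0809i>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809i"
            SVNListParentPath On
        </Location>

        # ...one block per directory that holds repositories, e.g. /code/uc2
        # and /code itself for the top-level t1 repository.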

    Read the article

  • droid cam makefile understanding and error

    - by nerorevenge
    I tried installing DroidCam on my Fedora 19 (64-bit). The link to the DroidCam application is here, and whenever I try to install it, the following Makefile is invoked: obj-m := v4l2loopback-dc.o all: make -C /lib/modules/`uname -r`/build M=`pwd` test: gcc test.c -o test clean: make -C /lib/modules/`uname -r`/build M=`pwd` clean insmod: sudo insmod v4l2loopback-dc.ko width=320 height=240 rmmod: sudo rmmod v4l2loopback-dc.ko and here is the error: -- INSTALL: Webcam parameters: '320' and '240' -- INSTALL: Building v4l2loopback-dc.ko make -C /lib/modules/`uname -r`/build M=`pwd` make: *** /lib/modules/3.9.5-301.fc19.x86_64/build: No such file or directory. Stop. make: *** [all] Error 2 -- INSTALL: v4l2loopback-dc.ko not built.. Failure (The build path happens to be a symbolic link.) I was wondering what exactly the Makefile is trying to do and why it is failing?
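
    The all: target is a standard out-of-tree kernel module build: make -C /lib/modules/$(uname -r)/build changes into the running kernel's build tree and M=$(pwd) points it back at the DroidCam sources. The error means that the build symlink has nothing behind it, which on Fedora usually means the kernel-devel package matching the running kernel is not installed. A sketch of the fix, assuming Fedora 19 package names:

        # Install the build tree/headers for the running kernel, then rebuild
        sudo yum install -y gcc make kernel-devel-$(uname -r) kernel-headers-$(uname -r)
        ls -l /lib/modules/$(uname -r)/build   # should now resolve to /usr/src/kernels/...
        make && make insmod                    # targets from the DroidCam Makefile above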

    Read the article

  • Taking video from video camera and displaying it with MPMoviePlayerController IPhone SDK

    - by Daniel
    Has anyone tried taking a video from the camera and then using the provided video player to play it? When you take the video in portrait mode, sometimes the movie will play (when the player puts it in landscape mode), but when it puts it in portrait mode you cannot view the movie; all you hear is sound. Sometimes in landscape mode it flickers and does not play right. Has anyone encountered this and found a way to fix it? My code to play the video looks like this: - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { NSURL *urls = [info objectForKey:@"UIImagePickerControllerMediaURL"]; moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:urls]; if (moviePlayer) { [moviePlayer play]; } } I checked the settings in the docs; nothing seems like it would fix this... Thanks

    Read the article

  • How can I get more info on high-CPU rundll32.exe process?

    - by Herb Caudill
    I recently clean-installed Win7 on my HP8530. Everything works well most of the time, but for the last few days, every morning after my computer has been idle overnight, I find that rundll32.exe is consuming a steady 50% of CPU (i.e. all of one processor). The only way I can make it go away is by restarting. Process Explorer has no information on what the process is running. If I try to do anything to rundll32.exe (kill process, suspend, etc.) I get "Error opening process: Access is denied." None of the tabs in the ProcExp properties dialog has any information at all. I have Norton Internet Security running with the latest definitions; I've run a full system scan and it gives me a clean bill of health. How can I get more information on why this process is running?
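
    rundll32.exe is only a host process, so the useful detail is the command line it was started with, which names the DLL and entry point it is running. The "Access is denied" in Process Explorer suggests the process belongs to another account, so run the tools elevated. A sketch using built-in commands:

        :: From an elevated command prompt: show which DLL each rundll32 instance is hosting
        wmic process where "name='rundll32.exe'" get ProcessId,CommandLine

        :: Or list the modules it has loaded
        tasklist /m /fi "IMAGENAME eq rundll32.exe"

    The same command line is visible in Task Manager if you enable the "Command Line" column, or in Process Explorer when it is started with "Run as administrator".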

    Read the article

  • SEO Canonical Issue resolution on iis

    - by kacalapy
    I have a site running on IIS that I have a canonical issue with. The error is: The page with URL "http://www.site.org/images/join_forum.gif" can also be accessed by using URL "https://www.site.org/images/join_forum.gif". Search engines identify unique pages by using URLs. When a single page can be accessed by using any one of multiple URLs, a search engine assumes that there are multiple unique pages. Use a single URL to reference a page to prevent dilution of page relevance. You can prevent dilution by following a standard URL format. How can I resolve this?
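
    If the site is meant to be served over plain HTTP only, one way to resolve the duplicate-URL report is a site-wide permanent redirect from https:// to http:// using the IIS URL Rewrite module (IIS 7+). A sketch; the host name is taken from the report above:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Canonical scheme" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTPS}" pattern="^on$" />
                  </conditions>
                  <action type="Redirect" url="http://www.site.org/{R:1}"
                          redirectType="Permanent" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    If HTTPS is the scheme you want to keep instead, the same rule works with the condition pattern ^off$ and an https:// target.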

    Read the article

  • deleting old unused images

    - by Ayyash
    As we move on with our content-based websites, lots of images get dumped in our images folder, but we rarely come across self-committed monkeys who delete their files once they no longer need them, which means we end up with a huge list of images in one folder, and it is very tricky to clean it up. My question is (and I don't know if this is the right website to ask it): is there a tool that allows me to find out if an image has been requested over the web in the last (n) months? My other general question is: how do you do it? How do you take control of your images folders? What policy do you enforce on developers to clean up? What measures do you take in order to decide what goes and what stays if you end up with an out-of-control situation? My suggestion was to rename the images folder, create a new one, copy the basic ones over, and wait for someone to complain about a broken image! :) I find this to be the most efficient.

    Read the article

  • Best Practices: How can admin deploy software to 100s of PC ?

    - by Gopal
    Hi ... The Environment: I am working for a college. We have a couple of labs (about 100 PCs) for students. At the end of the semester, the PCs will be full of viruses, corrupt system files, all sorts of illegal downloads etc. (everything you can expect from a student environment). At the end of the semester, we would like to wipe out all the systems and do a clean install (WindowsXP + a set of application suites) to get ready for the next batch of students. Question: Is there any free software that will enable an admin to deploy a clean disk image to all the PCs in one go?

    Read the article

  • django sphinx automodule -- basics

    - by haras.pl
    Hi, I have a project with several large apps, where the settings and app files are split. The directory structure goes something like this: project_name __init__.py apps __init__.py app1 app2 3rdparty __init__.py lib1 lib2 settings __init__.py installed_apps.py path.py templates.py locale.py ... urls.py Every app looks like this: __init__.py admin __init__.py file1.py file2.py models __init__.py model1.py model2.py tests __init__.py test1.py test2.py views __init__.py view1.py view2.py urls.py How do I use Sphinx to autogenerate documentation for that? I want something like this: for each app in the settings module or INSTALLED_APPS (not starting with django.* or 3rdparty.*), give me auto-generated documentation output based on docstrings, and run tests before git commit. BTW, I tried doing the .rst files by hand with .. automodule:: module_name :members: but it sucks for such a big project, and it does not work for settings. Is there an autogen method or something? I am not tied to Sphinx; is there a better solution for my problem?
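
    The stub .rst files don't have to be written by hand: sphinx-apidoc can walk the apps/ package and emit one automodule page per module, and documenting anything that imports Django settings works once conf.py points DJANGO_SETTINGS_MODULE at the split settings package before autodoc runs. A sketch; paths and module names are taken from the layout above and may need adjusting:

        # docs/conf.py -- a sketch; assumes docs/ sits next to project_name/
        import os
        import sys

        sys.path.insert(0, os.path.abspath('..'))                        # make the project importable
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project_name.settings'   # the split settings package

        extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']

    The stubs can then be regenerated whenever apps change, e.g. with sphinx-apidoc -f -o docs/apps project_name/apps, and that command plus the test runner can live in a git pre-commit hook.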

    Read the article

  • Getting error 0x000003eb when installing DDK sample printer drivers

    - by Andy
    I've got a development machine which has been severely abused when it comes to installing and removing printer drivers. I'm now at the stage where I want to install some sample printer drivers from the DDK (WDK), but unfortunately I get the message 'Unable to install printer. Operation could not be completed (error 0x000003eb).' So I tried installing the same printer driver built from the DDK in a clean Win 7 x64 VM, and it works, so the only thing I can imagine is that the driver store or driver folder may be slightly corrupt from the many previous printer drivers I had installed. So my question is: is there any way I can clean my system of old printer driver files? Or is there any repair functionality in Windows that may replace the common Windows printer drivers?
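
    A couple of built-in tools can show and remove leftover driver packages without reinstalling Windows (run them elevated; the oemXX.inf name below is a placeholder you read from the enumeration output):

        :: Print Server Properties -> Drivers tab, where stale printer drivers can be removed
        printui /s /t2

        :: Enumerate third-party packages in the driver store, then delete one by name
        pnputil -e
        pnputil -d oem42.inf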

    Read the article

  • thunderbird - how to have deleted emails automatically removed from gmail?

    - by pixeline
    I understand that by connecting via IMAP to my Gmail account with Thunderbird, whatever I do in Thunderbird would be reflected in Gmail. Yet if I delete emails in Thunderbird, those emails are still present in Gmail, even after a while. I set my account preferences in Thunderbird under the option "leave messages on the server?" to "until I delete them". Did I understand IMAP wrong, or did I set something up wrong? I would really like to be able to clean up my Gmail account via a desktop client such as Thunderbird, as it is much more responsive, and I have 4 GB of messages to clean up.

    Read the article

  • How can I block based on URL (from address bar) in a safari extension

    - by PerilousApricot
    I'm trying to write an extension that will block access to a (configurable) list of URLs if they are accessed more than N times per hour. From what I understand, I need to have a start script pass a "should I load this?" message to a global HTML page (which can access the settings object to get the list of URLs), which will give a thumbs-up/thumbs-down message back to the start script to allow or deny loading. That works out fine for me, but when I use the usual beforeLoad/canLoad handlers, I get messages for all the sub-items that need to be loaded (images, etc.), which screws up the accesses-per-hour limit I'm trying to enforce. Is there a way to synchronously pass messages back and forth between the two sandboxes so I can tell the global HTML page, "this is the URL in the window bar and the timestamp for when this request came in", so I can limit duplicate requests? Thanks!
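
    A sketch using the legacy Safari extension messaging API as I understand it: the injected start script asks the global page once per top-level page, passing location.href (the address-bar URL) rather than the sub-resource URLs that beforeload also fires for, and the global page answers synchronously by setting event.message. The shouldAllow bookkeeping helper and the blocked-page text are hypothetical:

        // injected.js (start script)
        var asked = false;
        document.addEventListener("beforeload", function (event) {
            if (asked || window !== window.top) return;   // one decision per page, top frame only
            asked = true;
            var allowed = safari.self.tab.canLoad(event, {
                url: window.location.href,                 // the address-bar URL
                ts: Date.now()
            });
            if (!allowed) {
                window.stop();
                document.documentElement.innerHTML = "<h1>Hourly visit limit reached</h1>";
            }
        }, true);

        // global.html script
        safari.application.addEventListener("message", function (event) {
            if (event.name !== "canLoad") return;
            // shouldAllow() would read the configurable URL list from
            // safari.extension.settings and count visits per hour (hypothetical helper)
            event.message = shouldAllow(event.message.url, event.message.ts);
        }, false);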

    Read the article

  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0, not having a 100% backup. I know. I have a mdadm RAID5 system of 4x3TB. Drives /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway. Recent events The RAID become degraded after a two drive failure. One drive [/dev/sdc] is really gone, the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4 device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1, not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2, not making a backup of the superblock and mdadm -E of the remaining drives. Recovery attempt I reassembled the RAID in degraded mode with mdadm --assemble --force /dev/md0, using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare; empty; identical drive. I removed the old /dev/sdc1 from the RAID mdadm --fail /dev/md0 /dev/sdc1 Mistake #3, not doing this before replacing the drive I then partitioned the new /dev/sdc and added it to the RAID. mdadm --add /dev/md0 /dev/sdc1 It then began to restore the RAID. ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff. Checking the result Several hours (but less then 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1. Here is where the trouble really starts I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late. mdadm --manage /dev/md0 --remove /dev/sde1 mdadm --manage /dev/md0 --add /dev/sde1 However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean using what I thought was the right order, and with /dev/sdc1 missing. mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1 That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4). Device order I then checked a recent backup I had of /proc/mdstat, and I found the drive order. md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1] 8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU] I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit...so there was no drive [3] but only [0],[1],[2], and [4]. I tried to find the drive order with the Permute_array script: https://raid.wiki.kernel.org/index.php/Permute_array.pl but that did not find the right order. Questions I now have two main questions: I screwed up all the superblocks on the drives, but only gave: mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1. Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is ok] if I just find the right device order? Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with mdadm --create /dev/md0 --assume-clean -l5 -n4 \ /dev/sdb1 missing /dev/sdd1 /dev/sde1 it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks. 
If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work I could start it in degraded mode and add the new drive /dev/sdc1 and let it resync again. It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
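
    On question 1: re-running --create --assume-clean rewrites superblocks only, so if the data offset is unchanged the data should still be recoverable once the slot order is right. On question 2: the [4] shown in mdstat is just a device number; what matters for parity is the slot position on the --create command line. A sketch of brute-forcing the order, destructive to superblocks only (the same thing that has already been done) while fsck -n never writes; the permutation list is illustrative, not exhaustive:

        #!/bin/sh
        # Try candidate slot orders; "missing" stands in for the dead original disk (old sdc).
        for order in \
            "/dev/sdb1 missing /dev/sdd1 /dev/sde1" \
            "/dev/sdb1 /dev/sde1 /dev/sdd1 missing" \
            "/dev/sdb1 missing /dev/sde1 /dev/sdd1" \
            "missing /dev/sdb1 /dev/sdd1 /dev/sde1"
        do
            echo "=== trying order: $order"
            mdadm --stop /dev/md0 2>/dev/null
            echo y | mdadm --create /dev/md0 --assume-clean --metadata=1.2 \
                --level=5 --chunk=512 --raid-devices=4 $order
            # With the right order, fsck recognises an ext4 superblock instead of bad magic
            fsck.ext4 -n /dev/md0
        done

    Once an order produces a recognisable filesystem, the array can be started degraded, /dev/sdc1 re-added, and the resync allowed to run again.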

    Read the article

  • Easiest way to replace preinstalled Windows 8 with new hard drive with Windows 7

    - by Andrew
    There are all kinds of questions and answers relevant moving Windows 8 to a new hard drive. I'm not seeing anything quite applicable to my situation. I have a new, unopened, unbooted notebook with pre-installed Windows 8. I will be replacing the hard drive before ever booting, unless that is not possible for some reason. I want to "downgrade" to Windows 7 Pro, and I want a clean installation. To do so legitimately, I apparently either need to: Upgrade Windows 8 to Windows 8 Pro using Windows 8 Pro Pack, then downgrade; or Just install a newly-licensed copy of Windows 7 Pro. (Let me know if I've missed an option.) Installation media is likely not a problem, though if I need something vendor-specific that I cannot otherwise download, that could present an issue (Asus notebook, if that matters). If I could, I would just buy the Pro Pack upgrade, swap the hard drive (without ever booting), then install Windows 7 Pro directly on the new hard drive, using the Pro Pack key for activation. Will this work? Are there any activation issues? Edited to clarify, as some comments and answers indicate confusion: Here is, ideally, what I want to do: Before ever powering on the notebook, remove the current hard drive. Replace this hard drive with a new, blank hard drive. Install a clean copy of Windows 7 Pro on this new, blank hard drive. Unless I have no choice to accomplish the end result (a clean install of Win7 Pro on the newly-installed, previously-blank hard drive), I am not wanting to: Install Windows 7 "over" the current Windows 8 install (after upgrading to Win8 Pro). That would involve using the currenly-installed hard drive. I want to use a new, different hard drive. Copy the Win8 install to the new hard drive, then install Windows 7 "over" that installation. Install Windows 7 "over" the current Windows 8 install (after upgrading to Win8 Pro), then copy the installation to the new hard drive. If I have to use one of those three options, I will, but only if there is no other choice. Please note that this question is not about licensing: I will purchase the necessary license(s) to accomplish this procedure legally (apparently either Win8 Pro Pack or Win7 Pro -- the former currently appears less expensive).

    Read the article

  • Deploy virtual PC based on snapshot using Hyper-V

    - by user27786
    I have a server with Hyper-V configured; it is easy to create a new virtual machine using the management tools, but it can take some time. I often see hosts that say "We can have your server ready in 5 seconds", and I wonder how they manage this. I would also like to deploy a fully functional clean server in 5 seconds. So how is this done? Is it by first installing a clean server, then taking a snapshot of it to later deploy new servers based on that snapshot? I tried doing this, but I did not find any way to install a new virtual machine based on the snapshot. Anyone got any thoughts to share on this?
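
    The "ready in 5 seconds" trick is usually not a snapshot of a running VM but a prepared (sysprepped) template VHD that new machines are built from, either by copying it or by giving each VM a differencing disk that chains back to the read-only parent. A sketch with the Hyper-V PowerShell module (Server 2012 or later); paths, sizes and the switch name are placeholders:

        # Parent: a sysprepped "clean server" image, kept read-only
        $parent = 'D:\Templates\CleanServer.vhdx'

        # New VM gets a thin differencing disk chained to the parent -- creation is near-instant
        $name = 'web-03'
        $disk = "D:\VMs\$name.vhdx"
        New-VHD -Path $disk -ParentPath $parent -Differencing | Out-Null
        New-VM -Name $name -MemoryStartupBytes 2GB -VHDPath $disk -SwitchName 'LAN' | Out-Null
        Start-VM -Name $name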

    Read the article
