Search Results

Search found 7470 results on 299 pages for 'storage engines'.


  • "System volume folder" always appearing in USB storage stick

    - by ?????? Oyewole
    Whenever I move or copy video files from the PC (Windows 8.1) to my USB storage device and plug it into my TV, I always see a system volume folder on the USB device. This folder can also be seen on the PC if I choose "view protected system files". My flash drive is formatted with a FAT32 file system. The question is: why is this happening on Windows 8.1, since I never had this problem on Windows 8 before upgrading, and how can I disable this feature? OK, that's two questions.
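
    For what it's worth, one commonly suggested mitigation is to stop Windows Search from indexing removable drives, which is one of the things that writes into that folder. This is a hedged sketch from memory, not a confirmed fix; verify the policy value name before relying on it:

        rem Run from an elevated command prompt; a restart may be needed.
        rem DisableRemovableDriveIndexing is a Windows Search policy value (assumed here).
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Windows Search" /v DisableRemovableDriveIndexing /t REG_DWORD /d 1 /f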

    Read the article

  • Storage drives causing system crash

    - by Chad
    I'm running CentOS 5.4 with a 750 GB (NTFS) and a 2 TB drive for storage. Originally I installed the 750; everything seemed fine, and then I installed the 2 TB drive with NTFS already partitioned. I noticed that when I would copy a lot of videos it would crash (no mouse or response from the server) about 20 minutes in. After doing some troubleshooting I noticed the 750 would also crash when doing the same task, so I decided that NTFS may be the problem. I unmounted the 2 TB drive and tried to partition and format it as ext2, but when using parted it would crash at the "writing inode tables" step. Looking at the dmesg logs, I believe this is the error: "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea as to what could be causing this?
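
    As a first diagnostic step (a hedged sketch, not from the original post), it may help to confirm the MTRR conflict and watch kernel messages while reproducing the copy workload:

        # Inspect the MTRR ranges the kernel has currently programmed
        cat /proc/mtrr
        # Watch kernel messages live while a large copy is running
        tail -f /var/log/messages
        # After the fact, search the kernel ring buffer for MTRR complaints
        dmesg | grep -i mtrr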

    Read the article

  • Using Windows Azure storage for backup

    - by Bruno
    I am currently looking at Windows Azure blobs as an option for backing up archive data. I want to be able to upload files from an external Windows machine via the internet, but I don't know enough about Windows Azure storage to make a decision. Some of the questions I have are: How do I upload the files? Is there a client application? Can I use Robocopy? Would it be fast enough, i.e. could I download or upload 1 TB of data in a week? Is it secure? Hopefully someone smarter than me can help me :-)
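
    On the upload question, Microsoft's command-line copy tool for blob storage is AzCopy; a minimal sketch follows (the account, container, and SAS token are placeholders, and the flag syntax shown is from the later AzCopy v10, so treat it as an assumption):

        rem Recursively upload a local archive folder to a blob container over HTTPS
        azcopy copy "D:\archive" "https://<account>.blob.core.windows.net/<container>?<sas-token>" --recursive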

    Read the article

  • Accessing a storage-side snapshot of a cluster-shared volume

    - by syneticon-dj
    From time to time I am in the situation where I need to get data back from storage-side snapshots of cluster shared volumes. I suppose I just never figured out a way to do it right, so I always needed to:

    1. expose the shadow copy as a separate LUN
    2. offline the original CSV in the cluster
    3. un-expose the LUN carrying the original CSV
    4. make sure my cluster nodes have detected the new LUN and no longer list the original one
    5. add the volume to the list of cluster volumes and promote it to be a CSV
    6. copy off the data I need
    7. undo steps 5-1 to revert to the original configuration

    This is quite tedious and requires downtime for the original volume. Is there a better way to do this without involving a separate host outside of the cluster?

    Read the article

  • KVM guest storage differences with NBD and NFS

    - by WojonsTech
    I am setting up my own little private cloud for my own use, maybe for a project or two. I am using Linux KVM on Debian 6. I have 3 servers: 2 of them compute nodes and 1 a storage node. I have already installed KVM, made a few test machines, and got my networking set up. I have 2 NICs on each server: one NIC is for web traffic, the other NIC is for network traffic. My first idea was to use NFS for storing the guest machines, which can range in size, maybe 8 GB, maybe 100 GB; it just depends. I have heard of NBD before and it seems like it could work, but I don't know what the performance differences are or whether it will affect my environment. NFS looks like it will be easier to use.
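
    For comparison, a hedged sketch of both approaches (paths, hostnames, and addresses below are placeholders, not from the original post):

        # NFS: export an image directory from the storage node...
        echo '/srv/vm-images 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
        exportfs -ra
        # ...and mount it on a compute node
        mount -t nfs storage-node:/srv/vm-images /var/lib/libvirt/images

        # NBD: serve one raw image per export from the storage node
        # (older positional port/file syntax, as shipped around Debian 6)
        nbd-server 2000 /srv/vm-images/guest1.img
        # ...and attach it as a block device on a compute node
        modprobe nbd
        nbd-client storage-node 2000 /dev/nbd0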

    Read the article

  • FreeNAS plugins not able to access storage

    - by dave
    I've just set up a FreeNAS box and have a couple of plugins (Sick Beard and SABnzbd) installed. Both of these have you select a directory where downloads should go. My storage is on /mnt/MediaVolume; however, when I navigate to /mnt it's an empty directory. When I SSH to the box, though, I can see it just fine. I'm thinking it may have something to do with permissions, but I'm not sure. Any suggestions on how to allow these plugins to view/have access? Thank you!
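
    A plausible explanation (an assumption, not confirmed by the post): FreeNAS plugins run inside a jail, so they see the jail's own /mnt rather than the host's. Making the dataset visible means adding storage to the jail in the GUI or, equivalently, a nullfs loopback mount from the host; the jail path below is hypothetical:

        # On the FreeNAS host: loopback-mount the dataset into the plugin jail
        mount -t nullfs /mnt/MediaVolume /mnt/plugins-jail/media
        # Inside the jail, the plugins can then be pointed at /media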

    Read the article

  • Building a Student Storage server

    - by DobotJr
    I work for a school district. I've been put in charge of building a storage server for students: a place for them to work off of from school and home. My challenge is getting this to work from home. At school they log in, authenticate, and get a mapped drive (S:) to their folder on the server (\\fileserver\studentname). My question is how I can make this available to students at home. The server is running Windows Server 2003 R1. I've got PHP, Apache, and MySQL working together. My idea is to write a script that will "crawl" through the directory containing all of the student folders, then create an instance of every file and folder in a MySQL DB. Then I'd create a login page that uses LDAP for authentication, and once they log in to the server from home, they get a page with folders and files tied to their username. Has anyone out there ever put something like this together?

    Read the article

  • JBOD with PERC H810

    - by primero
    I'm wondering if anybody has ever used Dell storage products like the MD3220 array in a JBOD configuration. From what I can tell, only the PERC H810 will work for external JBOD, but that is not terribly specific, and for some reason I couldn't find many examples on the web of people configuring Dell storage products as JBOD. My question is: is it possible to connect to an MD3220 array, or other Dell arrays, using a PERC H810 controller and use it as JBOD, and if so, do I have to configure every disk in the array as a RAID 0 volume?

    Read the article

  • LUNs not taken by the Windows server admin

    - by wildchild
    I have a scenario-based question, something I haven't faced till now, but I would be interested to know the answer. Say I have assigned LUNs (of 50 GB each) and put them in a storage group. However, the Windows server team did not grab the LUNs, but sent an acknowledgment saying the LUNs are aligned. I would like to know what will happen to the LUNs that belong to the SG. In my opinion they will remain in the SG as unassigned LUNs; or is there a possibility that the LUNs will move back to the storage pool?

    Read the article

  • Storing data, cost/gigabyte

    - by Micaela
    Can anyone give me a general estimate of what web hosts charge for data storage ($/gigabyte)? A shared web-hosting service is what I'm referring to. I have been trying to compare the prices for storage offered by business process automation SaaS providers, and now I'm looking at something more general.

    Read the article

  • Oracle Solaris 11 ZFS Lab for OpenWorld 2012

    - by user12626122
    Preface

    This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended; the feedback was that it was a little short. That's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy.

    Table of Contents

    Exercise Z.1: ZFS Pools
    Exercise Z.2: ZFS File Systems
    Exercise Z.3: ZFS Compression
    Exercise Z.4: ZFS Deduplication
    Exercise Z.5: ZFS Encryption
    Exercise Z.6: Solaris 11 Shadow Migration

    Introduction

    This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and ZFS file systems, the basic building blocks of the technology, and Compression, which is the complement of Deduplication. The exercises are just introductions; you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with.

    Exercise Z.1: ZFS Pools

    Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.

    Lab: You will check the status of existing zpools, create your own pool and expand it.

    Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this:

        root@solaris:~# zpool list
        NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool  15.9G  6.62G  9.25G  41%  1.00x  ONLINE  -
        root@solaris:~# zpool status
          pool: rpool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE   READ WRITE CKSUM
                rpool       ONLINE     0     0     0
                  c3t0d0s0  ONLINE     0     0     0
        errors: No known data errors

    Note the disk device the root pool is on: c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available:

        root@solaris:~# echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
            0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63>
               /pci@0,0/pci8086,2829@d/disk@0,0
            1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
               /pci@0,0/pci8086,2829@d/disk@2,0
            2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
               /pci@0,0/pci8086,2829@d/disk@3,0
            3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
               /pci@0,0/pci8086,2829@d/disk@4,0
            4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
               /pci@0,0/pci8086,2829@d/disk@5,0
            5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
               /pci@0,0/pci8086,2829@d/disk@6,0
            6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
               /pci@0,0/pci8086,2829@d/disk@7,0
        Specify disk (enter its number):

    The root disk is numbered 0. The others are free for use. Try creating a simple pool and observe the error message:

        root@solaris:~# zpool create mypool c3t2d0 c3t3d0
        'mypool' successfully created, but with no redundancy; failure
        of one device will cause loss of the pool

    So destroy that pool and create a mirrored pool instead:

        root@solaris:~# zpool destroy mypool
        root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
        root@solaris:~# zpool status mypool
          pool: mypool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE   READ WRITE CKSUM
                mypool      ONLINE     0     0     0
                  mirror-0  ONLINE     0     0     0
                    c3t2d0  ONLINE     0     0     0
                    c3t3d0  ONLINE     0     0     0
        errors: No known data errors

    Exercise Z.2: ZFS File Systems

    Task: You have to create file systems for later exercises.

    You can see that when a pool is created, a file system of the same name is created:

        root@solaris:~# zfs list
        NAME    USED   AVAIL  REFER  MOUNTPOINT
        mypool  86.5K  2.94G  31K    /mypool

    Create your file systems and mount points as follows:

        root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1

    The -o option sets the mount point and automatically creates the necessary directory.

        root@solaris:~# zfs list mypool/mydata1
        NAME            USED  AVAIL  REFER  MOUNTPOINT
        mypool/mydata1  31K   2.94G  31K    /data1

    Exercise Z.3: ZFS Compression

    Task: Try out the different forms of compression available in ZFS.

    Lab: Create a second file system with compression, fill both file systems with the same data, and observe the results.

    You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:

        compression=on | off | lzjb | gzip | gzip-N | zle

          Controls the compression algorithm used for this dataset.
          The lzjb compression algorithm is optimized for performance
          while providing decent data compression. Setting compression
          to on uses the lzjb compression algorithm. The gzip
          compression algorithm uses the same compression as the
          gzip(1) command. You can specify the gzip level by using the
          value gzip-N, where N is an integer from 1 (fastest) to 9
          (best compression ratio). Currently, gzip is equivalent to
          gzip-6 (which is also the default for gzip(1)).

    Create a second file system with compression turned on. Note how you set and get your values separately:

        root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
        root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
        root@solaris:~# zfs get compression mypool/mydata1
        NAME            PROPERTY     VALUE  SOURCE
        mypool/mydata1  compression  off    default
        root@solaris:~# zfs get compression mypool/mydata2
        NAME            PROPERTY     VALUE   SOURCE
        mypool/mydata2  compression  gzip-9  local

    Now you can copy the contents of /usr/lib into both your normal and compressing file systems and observe the results. Don't forget the dot or period (".") in the find(1) command below:

        root@solaris:~# cd /usr/lib
        root@solaris:/usr/lib# find . -print | cpio -pdv /data1
        root@solaris:/usr/lib# find . -print | cpio -pdv /data2

    The copy into the compressing file system takes longer, as it has to perform the compression, but the results show the effect:

        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.35G  1.59G  31K    /mypool
        mypool/mydata1  1.01G  1.59G  1.01G  /data1
        mypool/mydata2  341M   1.59G  341M   /data2

    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administration Guide.
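
    As a quick aside on that quota note (a minimal sketch, not part of the original lab), capping one of the lab file systems is a one-liner; the output formatting below is from memory:

        root@solaris:~# zfs set quota=1G mypool/mydata1
        root@solaris:~# zfs get quota mypool/mydata1
        NAME            PROPERTY  VALUE  SOURCE
        mypool/mydata1  quota     1G     local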
    Exercise Z.4: ZFS Deduplication

    The deduplication property is used to remove redundant data from a ZFS file system. With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.

    Task: See how to implement deduplication and observe its effects.

    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.

        root@solaris:/usr/lib# zfs destroy mypool/mydata2
        root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
        root@solaris:/usr/lib# rm -rf /data1/*
        root@solaris:/usr/lib# mkdir /data1/2nd-copy
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.02M  2.94G  31K    /mypool
        mypool/mydata1  43K    2.94G  43K    /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.02G  1.99G  31K    /mypool
        mypool/mydata1  1.01G  1.99G  1.01G  /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.99G  1.96G  31K    /mypool
        mypool/mydata1  1.98G  1.96G  1.98G  /data1

    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property, dedupratio:

        root@solaris:/usr/lib# zpool get dedupratio mypool
        NAME    PROPERTY    VALUE  SOURCE
        mypool  dedupratio  4.30x  -

    Deduplication can also be checked using "zpool list":

        root@solaris:/usr/lib# zpool list
        NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        mypool  2.98G  1001M  2.01G  32%  4.30x  ONLINE  -
        rpool   15.9G  6.66G  9.21G  41%  1.00x  ONLINE  -

    Before moving on to the next topic, destroy that dataset and free up some space:

        root@solaris:~# zfs destroy mypool/mydata1

    Exercise Z.5: ZFS Encryption

    Task: Encrypt sensitive data.

    Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS encryption; in particular, it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.

        root@solaris:~# zfs create -o encryption=on mypool/data2
        Enter passphrase for 'mypool/data2': ********
        Enter again: ********
        root@solaris:~#

    Creation of a descendant dataset shows that encryption is inherited from the parent:

        root@solaris:~# zfs create mypool/data2/data3
        root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
        NAME                PROPERTY    VALUE              SOURCE
        mypool/data2        encryption  on                 local
        mypool/data2        keysource   passphrase,prompt  local
        mypool/data2        keystatus   available          -
        mypool/data2        checksum    sha256-mac         local
        mypool/data2/data3  encryption  on                 inherited from mypool/data2
        mypool/data2/data3  keysource   passphrase,prompt  inherited from mypool/data2
        mypool/data2/data3  keystatus   available          -
        mypool/data2/data3  checksum    sha256-mac         inherited from mypool/data2

    You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session, you may wish to explore the changing of a key using "zfs key -c mypool/data2".

    Exercise Z.6: Shadow Migration

    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.

    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.

    Lab: Create the infrastructure for shadow migration and transfer one file system into another.

    First create the file system you want to migrate:

        root@solaris:~# zpool create oldstuff c3t4d0
        root@solaris:~# zfs create oldstuff/forgotten

    Then populate it with some files:

        root@solaris:~# cd /var/adm
        root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten

    You need the shadow-migration package installed:

        root@solaris:~# pkg install shadow-migration
        Packages to install:             1
        Create boot environment:        No
        Create backup boot environment: No
        Services to change:              1

        DOWNLOAD    PKGS  FILES  XFER (MB)
        Completed   1/1   14/14  0.2/0.2

        PHASE                       ACTIONS
        Install Phase               39/39

        PHASE                       ITEMS
        Package State Update Phase  1/1
        Image State Update Phase    2/2

    You then enable the shadowd service:

        root@solaris:~# svcadm enable shadowd
        root@solaris:~# svcs shadowd
        STATE   STIME    FMRI
        online  7:16:09  svc:/system/filesystem/shadowd:default

    Set the file system to be migrated to read-only:

        root@solaris:~# zfs set readonly=on oldstuff/forgotten

    Create a new ZFS file system with the shadow property set to the file system to be migrated:

        root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered

    Use the shadowstat(1M) command to see the progress of the migration:

        root@solaris:~# shadowstat
                                    EST
                            BYTES   BYTES           ELAPSED
        DATASET             XFRD    LEFT    ERRORS  TIME
        mypool/remembered   92.5M   -       -       00:00:59
        mypool/remembered   99.1M   302M    -       00:01:09
        mypool/remembered   109M    260M    -       00:01:19
        mypool/remembered   133M    304M    -       00:01:29
        mypool/remembered   149M    339M    -       00:01:39
        mypool/remembered   156M    86.4M   -       00:01:49
        mypool/remembered   156M    8E      29      (completed)

    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar; see the ZFS Administration Manual.

    That concludes this lab session.

    Read the article

  • How do I add additional parameters to query string of a Firefox Search Plugin?

    - by Goto10
    I have just installed the DuckDuckGo add-on in Firefox 11.0, running on XP SP3. I would like to add additional parameters to the query string. However, any changes I make are not reflected in the query string when doing a search. I found the duckduckgo.xml file at C:\Documents and Settings\User Name\Application Data\Mozilla\Firefox\Profiles\Profile Name.default\searchplugins. I opened it up with Notepad++ and added the line for kl=uk-en:

        <SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
          <os:ShortName>DuckDuckGo</os:ShortName>
          <os:Description>Search DuckDuckGo (SSL)</os:Description>
          <os:InputEncoding>UTF-8</os:InputEncoding>
          <os:Image width="16" height="16">data:image/x-icon;base64, -Removed to shorten-</os:Image>
          <os:Url type="text/html" method="GET" template="https://duckduckgo.com/">
            <os:Param name="q" value="{searchTerms}"/>
            <os:Param name="kl" value="uk-en"/>
          </os:Url>
        </SearchPlugin>

    However, the kl=uk-en parameter does not appear in the query string when searching (despite several Firefox restarts).
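
    One possible culprit (an assumption on my part, not something the post confirms): Firefox of that era caches search plugin definitions in the profile, so edits to the .xml can be ignored until the cache is rebuilt. With Firefox closed, deleting the cache files and restarting may force a re-read; the file names below are from memory:

        rem Run with Firefox closed; the profile path matches the one above
        del "C:\Documents and Settings\User Name\Application Data\Mozilla\Firefox\Profiles\Profile Name.default\search.sqlite"
        del "C:\Documents and Settings\User Name\Application Data\Mozilla\Firefox\Profiles\Profile Name.default\search-metadata.json"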

    Read the article

  • How is SU indexed so fast on Google?

    - by ekaj
    I just did a quick Google search for a question that was 20 minutes old, to look for an answer, and it was already in Google's results - how is this possible? I glanced over this article, which seems to suggest that SU has added RSS feeds (which SU has, but when I opened the feed the article said last posted 6 minutes ago, while when Googled it was 11 hours old) - which leads me to think (based on that article; I don't know much about search indexing, but I am reading about it at the moment) that most of this indexing is done thanks to the sitemap. Is there anything else I am unaware of that helps SU questions get on Google so fast?

    Read the article

  • Are there any decent open-source search engine solutions?

    - by Nazariy
    A few weeks ago my friend asked me how hard it is to launch your own search engine service, with a list of websites that are supposed to be crawled from time to time. The first thing that came to mind was Google Custom Search; however, the pricing policy is quite tricky and would drain your budget if you reach 500K queries per year. Another solution I found here was SearchBlox, which can be compared to the Google Mini service. It's quite a good solution if you are planning to cover search over a small number of websites, but for larger projects it is not very handy. I also found a few other search platforms like Lucene, Hadoop and Xapian, which seem to be quite powerful solutions for reaching Google's search quality, and Nutch as a web crawler. As with most open-source projects, they share the same problem: lack of comprehensive guidance on usage and examples, and it's expected that you are an expert in the subject. I'm wondering if any of you are using these solutions; which of them would you recommend, and what should I be aware of?

    Read the article

  • Why does Google Chrome ignore "last_known_google_url" property in "Local State" file?

    - by Peter Sivák
    I want to force my Google Chrome web browser (version 21.0.1180.89, 64-bit) to use non-localized search (thus Google in English) through the address bar, using the default Google search engine. To achieve that, I have to change the value of the property last_known_google_url to https://www.google.com/?hl=en& in the Local State file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Local State). In that file, there should be the property: "browser": { "last_known_google_url": but it is not there. Even if I add the property, it has no impact on search; Google Chrome does not use the property and still searches in the localized version. Another option is to put the property in the Preferences file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Default/Preferences), which works perfectly when I start Google Chrome and do some search. But just after that, the property (actually the whole Preferences file) is overwritten, so "the most important" trailing part ?hl=en& of the property value is removed, and without it the non-localized search no longer works. Why does Google Chrome ignore the last_known_google_url property in the Local State file?

    Read the article

  • How do I change the search engine used by about:home?

    - by Lekensteyn
    Firefox 4's default home page provides a search engine with some snippets below. Is there any way to customize the search engine used, through about:config or some other configuration file? localStorage["search-engine"] sometimes gets reset, possibly after a Firefox update. I would like to avoid creating a Greasemonkey script that runs on about:home. If an extension exists to fulfill the task, I'd be happy too. I'm using Firefox from Kubuntu 11.04, for that matter.

    Read the article

  • Generate metadata of all files in a dir?

    - by nmuntz
    We are working on a project that is quite big, and it's stored in an SVN repository under different folders, with many files all over the place. Quite often, it is hard to locate the document that has a certain keyword or phrase. Does anyone know of any program that will generate and index the metadata of all the files that are in these documentation folders? (Most file types are xls, doc, and ppt.) Windows Search and Google Desktop could be an option, but those would generally index the whole hard drive, emails, etc., which is probably much more than what we need and would not be suited to something more folder-specific. An example of what I'm looking for: a program or web page where I enter "John Doe" and it shows me all files in MyProjectFolder/ that contain the keyword "John Doe". This would of course already be indexed somewhere, so searches should be almost instantaneous. Is there such a tool, or am I asking too much? Thanks in advance!

    Read the article

  • Google Chrome custom search engine for secure Wikipedia

    - by gdejohn
    I have this custom search engine set up in Google Chrome:

        https://encrypted.google.com/search?q=site%3Aen.wikipedia.org+%s&btnI=745

    It searches Google for site:en.wikipedia.org {query}, and the btnI=745 is for I'm Feeling Lucky, so it automatically redirects to the first result. I like this better than using Wikipedia's search function directly because it gives me very effective approximate string matching, so I can misspell my search, or leave a word out, or just search for some keywords, and I still get what I'm looking for right away. What I'd like is for it to use Wikipedia's secure gateway:

        https://secure.wikimedia.org/wikipedia/en/wiki/

    It's easy enough to set up a custom search engine that uses the secure version of Wikipedia's search function directly, but I can't figure out how to correctly incorporate it into my version going through Google. Nothing I've tried works.
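
    A hedged guess at what might work (untested, and the secure gateway's path layout is an assumption): keep the same I'm-Feeling-Lucky trick but point the site: operator at the secure host and path instead, e.g.

        https://encrypted.google.com/search?q=site%3Asecure.wikimedia.org%2Fwikipedia%2Fen+%s&btnI=745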

    Read the article

  • How to modify the language used by the Google Search Engine on IE 9.0?

    - by Seb Killer
    I would like to know how we can modify the settings of the Google search engine used in Internet Explorer 9.0 to force it to use a specific language. Our problem is the following: as it uses geolocation by default, and we are in Switzerland, it picks the first of the official languages, which is Swiss German. However, we are located in Geneva, where French is the official language. Furthermore, as most of our users speak English, we would like to force the language to be English and not Swiss German. Does anybody know how to achieve this? Thanks a lot, Sébastien
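
    One approach that may work (an assumption, not a confirmed fix): IE9 search providers are OpenSearch XML definitions, much like the Firefox one shown earlier on this page, so adding Google's hl parameter to the provider's URL template should pin the interface language regardless of geolocation:

        https://www.google.com/search?q={searchTerms}&hl=en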

    Read the article

  • How to take a search query and append modifiers to the end of it

    - by Kimber
    This is a Greasemonkey question. What I'm trying to do is modify an old Google discussions script. What we're wanting to do is be able to take the Google search query and add modifiers to the end of it, like this:

        search query: "superuser"
        modifiers: inurl:greasemonkey+question
        end result: "superuser" inurl:greasemonkey+question

    The old script creates a new div within the "hdtb_more_mn" element, which is where you get the new Discussions tab. However, since the "tbm=dsc" option to do a discussion search has died, this script no longer works; hence the need to add modifiers to your searches. I tried to edit the script, but it appends the modifiers to the end of the URL, which includes "&client=firefox-a&hs=8uS&rls=org.mozilla:en-US:official". This means you're also searching for the above as well as your query, which doesn't work. I would like to be able to append the modifiers at the end of the search query, rather than the whole URL. I'm just not sure how to code it to where it adds the "&tbm=" stuff within "discussionDiv.innerHTML" to the end of the query. The Google search box id seems to be "gbqfq", but I'm not sure how to add this id. Here is the old script:

        // ==UserScript==
        // @name        Add Back Google Discussions
        // @version     1.4
        // @description Adds back the Discussion filters to Google Search
        // @include     *://*.google.tld/search*
        // ==/UserScript==

        var url = location.href;
        if (url.indexOf('tbm=dsc') < 0)
            addFilterType('dsc', 'Discussions');

        function addFilterType(val, name) {
            var searchType = document.getElementById('hdtb_more_mn');
            var discussionDiv = document.createElement('DIV');
            discussionDiv.className = 'hdtb_mitem';
            discussionDiv.innerHTML = '<a class="q qs" href="' +
                (url.replace(/&tbm=[^&]*/g, '') + '&tbm=' + val) + '">' + name + '</a>';
            searchType.innerHTML += discussionDiv.outerHTML;
        }

    Thanks for any help, or suggestions on who to ask. Google Chrome has an extension for discussion searches, but FF doesn't seem to have one as of yet, which is why I'm trying to modify the above.
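
    For the query-only rewrite the asker describes, a hedged sketch in the same userscript context (the helper below is hypothetical, not part of the original script): modify just the value of the q= parameter so the modifiers attach to the search terms rather than to the whole URL:

        // Hypothetical helper: append modifiers to the q= parameter only,
        // leaving &client=, &rls=, etc. untouched.
        function appendToQuery(href, modifiers) {
            // '$1' keeps the existing q=<terms>; '+' is the URL-encoded space
            return href.replace(/([?&]q=[^&]*)/, '$1+' + modifiers.replace(/ /g, '+'));
        }
        // Example: appendToQuery(url, 'inurl:greasemonkey question')
        // yields ...&q=%22superuser%22+inurl:greasemonkey+question...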

    Read the article
