Search Results

Search found 3618 results on 145 pages for 'huge'.

Page 101/145 | < Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >

  • Why didn't database partitioning work? Extract from thedailywtf.com

    - by questzen
    Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx. Article summary: The DBA suggests an approach involving rigorous partitioning, 10 partitions per disk (3 actual disks and 3 RAID). The stats show that the performance is non-optimal. Then the DBA suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sys-admin then sets up a single disk with a single partition and saves the day. The size of the disks was not mentioned, but given today's typical disk sizes (on the order of 100 GB), the partitions would be huge; it surprises me that the single-disk, single-partition setup outperformed the others. Initially I suspected that the data was segregated and hence gave faster reads. But how come the performance didn't degrade over time with all the inserts and updates happening? Saw this on reddit, but the explanation there was mostly spindle/platter centered, and there was no mention of this in the article. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning); this would increase fetch times. Any thoughts?

    Read the article

  • What could be wrong with my VLAN?

    - by Matt
    I've got VLAN 10 set up as a management VLAN. The management VLAN comes off port 48 and links to another set of switches that do not support VLANs, so it was, I believe, set up as an untagged access port. In the past this was a different brand of switch and this worked fine. However, since changing to the HP V1910-48G series I can't seem to get this working. I must point out that as far as I'm aware it is wired up properly (I can't check this physically as I'm working remotely and have asked the tech who's got access to double-check for me). Now I don't have a huge amount of experience with VLAN environments, but AFAIK this is right. I've set port 48 (linked to the management switches) as an untagged port with PVID 10 and the access link type. Is this all I need to do from a configuration perspective to ensure all devices connected to port 48 end up on VLAN 10 without needing to tag their frames, i.e. the tag is added by the switch before the frame is forwarded?

    Read the article

  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory. I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD Groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD Group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD Group to mass assign permissions if I knew all managers would need a certain level of access (read/write/contribute/whatever). Then if a manager joins the company or leaves it, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this? A) Is it possible to use dynamically-updated AD Groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?) B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint Groups or AD Groups are the way to go. My main concern is dynamic updating. C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for? A big thanks to anyone who can help me out!!

    Read the article

  • Viewing Postscript (or PDF) on OS X: Aliasing issues

    - by mankoff
    I am generating Postscript graphics and am trying to find a balance between non-aliasing and over-aliasing. If I use the raw Ghostscript viewer gs on the Postscript, it looks good. The text appears anti-aliased, but the image remains nice and blocky. Unfortunately, gs has no real user interface and loses all of the nice things that Preview.app has. I could install gv, but the dependency bloat is huge! It requires all of GNOME. And even that isn't a great viewer compared to Preview.app or Skim.app. Here is an image viewed with gs. From a user-interaction and Mac-ish perspective, Preview.app (or Skim.app) is a much nicer program to use. They have the option to turn aliasing on or off, but neither option looks very good. With aliasing on, the image is blurry. When it is off, the graphic matches what is seen from gs, but there are two issues. Minor issue: the font is ugly. Uglier than with gs. Major issue: every PDF is un-aliased, making it hard to read regular PDFs full of text. So, in summary: Is there a way to manually generate the PDF from the PS that overcomes these issues? Is there a way to find a middle ground of alias/unalias with Preview.app? Is there another app that displays with quality like gs but has a decent UI like Skim.app or Preview.app? Is there a way to have Preview.app turn off aliasing for only one file (containing graphics) but leave it enabled in general so that text PDFs are still readable?
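
    One possible route for the PS-to-PDF part (a hedged sketch; file names are placeholders) is Ghostscript's pdfwrite device or its ps2pdf wrapper, which keeps text as real vector text so a viewer can anti-alias it while bitmap images stay untouched:
      ps2pdf figure.ps figure.pdf
      # or spelled out; the -dPDFSETTINGS value is optional tuning
      gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dPDFSETTINGS=/prepress -sOutputFile=figure.pdf figure.ps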

    Read the article

  • Drive letter not appearing after heat-related crash

    - by NickAldwin
    I recently had my old PC (it has 3 physical hard drives partitioned into 6 partitions) off while on vacation. When I came back, I turned it on. I hadn't realized the room was warmer than it usually is due to hot weather while I was away. The computer was extremely slow to start up, then it crashed. When I rebooted, it got halfway through chkdsk on one of the non-system partitions, then crashed again. I opened it up and felt the hard drives, and immediately shut down the computer and moved it to my basement to cool down because it was so hot. I left it there for a length of time while I reinstalled the A/C. I have now turned it on again. It is working fine, and every drive except for the one with the partition that was being checked has appeared in Windows. I scheduled chkdsk for all of the other partitions anyway, just in case, but I'm worried about that drive. I'm pretty sure the drive itself hasn't broken, but crashing in the middle of a chkdsk repair may have corrupted the data. What would you do in this situation? Most of the data on that drive was backed up, so it's not a huge deal if I lost it, but I'd like to get it back if I could. I also would love to regain usability of the drive, even if I have to wipe it -- but that's a last-resort sort of thing. What do you suggest I do?

    Read the article

  • Triple-Boot + 4 partition Limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP-Ubuntu-Windows 7 triple boot machine. Since the drive is absurdly huge (1 TB) I wouldn't mind throwing ReactOS into the mix, too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program files partition and a media partition. Since I really didn't want to install XP from scratch, I cloned this setup on my new drive. This leaves me one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid having to install XP from scratch like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OS's and because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them: Install Windows 7 on my media partition. This would work, but I prefer to keep my media partition completely separate from any OS, so that I can reformat an OS partition without affecting my media partition at all. Use wubi or something to install Ubuntu in the same partition as something else. Again, this is brittle. Move all my media to a logical drive on an extended partition. Create another logical drive on this extended partition for Ubuntu. The problem here is that extended partitions are rather brittle--if you nuke one, it renders the rest useless. Just put the old drive back in my computer and run XP off it. Use the new one for the other OS's. The problem here is that the old drive is slower and uses extra power, generates extra heat, etc. Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • "Dictionary problem." Error with VMPlayer

    - by George Mauer
    I'm pretty new to using VMware virtualization (I've been a VirtualBox user), so I'm hoping you guys can help me out. I recently got an external USB disk containing a VM for a client, downloaded VMware Player, set it up with "Open a Virtual Machine", ran it, easy as pie. After working with it a bit this morning, I shut the VM down, and now trying to start it back up again I get the "Dictionary problem" error. I tried removing the VM from my library; now it happens whenever I try to add it back in. In the meantime, I can still access other virtual machines, so it seems like the problem might be with the virtual disk. So two questions: This is obviously not a very helpful error message. Where can I go to get more information? My Application event log doesn't contain anything from VMware. What steps can I take to fix the problem? Edit: A couple more pieces of information. I did not take any snapshots. I don't think VMware Player even has that ability. I have a zip file of (what I assume is) the state of the VM when it was sent to me. I cannot unzip it, as it is huge and simply requires more HD space than I have available, but I did extract the vmx file and examine it. Other than the UUIDs and the fact that mine reads cleanShutdown = "FALSE", they are identical. The log contains the following lines:
      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead: Unable to load dict from 'E:....\MachineName.vmsd'.
      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead failed for file 'E:....\MachineName.vmx': Dictionary problem (6)
      Jun 23 10:11:18.082: vmx| SNAPSHOT: Snapshot_TimeStampTiers failed: Dictionary problem (6)

    Read the article

  • How to kill an ostensibly immortal process?

    - by DeeDee
    I had some huge file transfers operating on an NFS mount. The server on which the mount point resided was carelessly rebooted, and now the server from which these large transfers were initiated seems to be bogged down by them. If I run top, I see the following: The first thing I tried was to run kill with each of the -1, -2, -9 and -15 flags, and each of the process ids shown above, in turn. This let me proceed, but didn't kill the processes. The next thing I attempted was to reboot the server, but neither reboot nor shutdown -r now worked. When I ran shutdown -r now, the standard broadcast message was sent out, but the server did not reboot. I confirmed this by looking at the server uptime, which was 25 days. So now I'm a little stuck. I'm running these commands as root. EDIT: Here's another interesting tidbit: in top, I don't see any other processes using more than a fraction of a percent of memory or more than 5% of CPU. EDIT 2: output of /var/log/messages
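
    One hedged avenue (PIDs and the mount point below are placeholders): processes stuck on a dead NFS mount usually sit in uninterruptible sleep ("D" state), which no signal can reach until the I/O returns, so checking the state and force- or lazy-unmounting the dead export may be more productive than further kill attempts:
      ps -o pid,stat,wchan,cmd -p 12345          # STAT "D" = uninterruptible sleep, typically stuck I/O
      sudo umount -f /mnt/nfsshare               # force-unmount the dead NFS export
      sudo umount -l /mnt/nfsshare               # or lazy-unmount it if the forced unmount is refused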

    Read the article

  • How can I totally flatten a PDF in Mac OS on the command line?

    - by Matthew Leingang
    I use Mac OS X Snow Leopard. I have a PDF with form fields, annotations, and stamps on it. I would like to freeze (or "flatten") that PDF so that the form fields can't be changed and the annotations/stamps are no longer editable. Since I actually have many of these PDFs, I want to do this automatically on the command line. Some things I've tried/considered, with their degree of success: Open in Preview and Print to File. This creates a totally flat PDF without changing the file size. The only way to automate this seems to be to write a kludgy UI-based AppleScript, though, which I've been trying to avoid. Open in Acrobat Pro and use a JavaScript function to flatten. Again, not sure how to automate this on the command line. Use pdftk with the flatten option. But this only flattens form fields, not stamps and other annotations. Use cupsfilter, which can create PDF from many file formats. Like pdftk, this flattened only the form fields. Use cups-pdf to hook into the Mac's print server and save a PDF file instead of printing. I used the MacPorts version. The resulting file is flat but huge. I tried this on an 8MB file; the flattened PDF was 358MB! Perhaps this can be combined with a ghostscript call as in "Ubuntu Tip: Howto reduce PDF file size from command line". Any other suggestions would be appreciated.
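
    One hedged, unverified sketch of a scriptable pipeline: pdftk for the form fields, then Ghostscript's pdfwrite device with PreserveAnnots=false to render the remaining annotations into the page content (whether every stamp type survives this needs checking on a sample file, and the file names are placeholders):
      pdftk input.pdf output step1.pdf flatten
      gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dPreserveAnnots=false -sOutputFile=flat.pdf step1.pdf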

    Read the article

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question but I haven't found a specific answer to my situation. As I want to pay for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum or phpBB type of software) in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances on Amazon and Rackspace)? Do you know of any alternative to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.
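
    For reference, the nginx + gunicorn pairing mentioned above would look roughly like the line below; the module name, worker count and socket path are illustrative assumptions, and nginx would simply proxy_pass to the socket:
      gunicorn myforum.wsgi:application --workers 2 --bind unix:/tmp/gunicorn.sock --daemon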

    Read the article

  • How to verify TRIM/discard on encrypted swap?

    - by svarni
    I am using an encrypted swap partition via ecryptfs-setup-swap on my Ubuntu 13.04 computer with an SSD. I have manually set up TRIM for my ext4 root partition (simply by adding the "discard" option in /etc/fstab). I also manually ran fstrim on the root partition prior to booting, and using dstat I saw that for a few seconds several GB/s of data were written to the disk. That was presumably the effect of the trim command. These high write rates are reproducible by deleting huge files and did not occur before setting up TRIM, so I take them as evidence of working trim/discard. Manually enabling TRIM on my root partition has stopped the wearout of my precious new disk, from 365 used reserved blocks (out of 6176 total) within three months down to 0 additional used reserved blocks within three additional months (data from SMART attributes). Because I want to minimize the wearout of my SSD, I now would like to know whether my swap partition (which is encrypted using ecryptfs-setup-swap) also makes use of the trim/discard option. I tried sudo swapon -d -v /dev/mapper/cryptswap1 but did not receive particular information ("-v") about whether trim/discard ("-d") was applied. If unsupported, I would expect a message. Then I tried sudo dd if=/dev/sda6 count=1 bs=1M | xxd | less directly after booting, when no swap space was in use, but I saw not only zeroes. I assume that, when looking at freshly trimmed regions, the disk would return zeroes instead of random sectors (and according to some forums, (unencrypted) swap space is trimmed once upon boot). Long story short: are there any ideas on how to test if TRIM is effectively used for my encrypted swap? And if not, any ideas on how to, at least manually, for once, trim the whole swap space? I wouldn't want to tinker with the partition itself, because I don't know if it needs to be reinitialized as (encrypted) swap - I don't want to be left with an unbootable system :)
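
    One hedged check worth adding here (device names as in the commands above): whether the dm-crypt mapping passes discards through at all, since the encryption layer drops TRIM requests unless it was set up with discards allowed:
      sudo dmsetup table cryptswap1              # an "allow_discards" flag in the output means dm-crypt passes TRIM through
      lsblk --discard /dev/mapper/cryptswap1     # non-zero DISC-GRAN / DISC-MAX means discards are supported at this layer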

    Read the article

  • What's a good solution for file-tagging in linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements: any file readable by the user can be tagged freely; a user can search for files matching one or several tags; files can be moved around without losing the previously associated tags; the system could be backed up easily; no dependencies on any desktop environment; if any GUI is involved, there must be a CLI fallback. I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about this hard enough yet. Meanwhile I'll review Beagle and Tracker, which have been mentioned here, and see how they perform. OK, so Beagle has huge GNOME dependencies, and Tracker is OK-ish, but still has some dependencies I don't like... Been doing some more research, and the way to go could very well be extended file attributes. That's a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). I would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
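
    The extended-attribute hack hinted at above could look roughly like this hedged sketch (the user.tags attribute name is just a convention assumed here, and it needs the setfattr/getfattr tools from the attr package):
      setfattr -n user.tags -v "lossless,live" album.flac      # tag a file
      getfattr -n user.tags album.flac                         # read the tags back
      # crude search: list files whose user.tags value contains "live"
      find . -type f -exec sh -c 'getfattr --only-values -n user.tags "$1" 2>/dev/null | grep -q live && echo "$1"' _ {} \;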

    Read the article

  • Extracting information from active directory

    - by Nop at NaDa
    I work in the IT support department of a branch of a huge company. I have to take care of a database with all the users, computers, etc. I'm trying to find a way to automatically update the database as much as possible, but the IT infrastructure guys don't give me enough privileges to use Active Directory in order to dump the users, nor do they have the time to give me the information that I need. Some days ago I found Active Directory Explorer from Sysinternals, which allows me to browse through Active Directory, and I found all the information that I need there (username, real name, date when it was created, privileges, company, etc.). Unfortunately I'm unable to export the data to a human-readable format. I'm just able to take a snapshot of the whole database in a machine-readable format. Taking the snapshot takes hours, and I'm afraid that the infrastructure guys won't like me doing entire snapshots on a regular basis. Do you know of any tool (command-line is preferable) that would allow me to retrieve the values of the keys or export them to XML, CSV, etc.?
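
    If a plain LDAP query is acceptable, a hedged sketch with ldapsearch (from the OpenLDAP client tools) run under an ordinary user account might be enough, since read access to these attributes usually does not need special privileges; the host, bind account and base DN below are placeholders, and this assumes the domain controller allows simple binds:
      ldapsearch -x -LLL -H ldap://dc01.example.com -D "someuser@example.com" -W -b "dc=example,dc=com" "(objectCategory=person)" sAMAccountName displayName title company whenCreated > users.ldif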

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi, I have about 200 sites, each of which has 2 servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copies them to the secondary server, restores the backup and log with NORECOVERY, creates the endpoints and sets the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage sql 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love to be able to use a script to ensure there are matching endpoints (same ports) on both servers, back up the database, back up the log, copy the backups to the second server, restore the database and log with NORECOVERY, set the partners on both servers, and somehow confirm that the databases are linked and synchronized. Well, thanks for reading :)

    Read the article

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total. Three of those computers are Macs; the rest run Windows (1 Vista, 4 Windows 7 and 2 XP). I'm very open to what the method should be, but you should also consider the following: very limited resources; quite "small" bandwidth (4 MB for everyone (download), 0.4 MB (upload) - yep, that's it - though this might get a little bit better). One of the main things to back up would be the mails; considerations: all Windows computers use Outlook, mainly 2003, and there is one Mac that uses Outlook too (for Mac of course, not 2011 yet). We also have to back up the files: not a huge amount, very few very big files, very organized (by machine). What I would like is to hear your opinions as to which would be the best method (or combination of methods, preferably one of course), considering the above. We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great; remember the bandwidth is unbearable. The last thing to consider is that we would like to do weekly updates (unless the method is very easy of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update; please ask for any clarification needed! Please avoid any answers like "upgrade all to Windows 7 and throw away your Macs" :) Ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it for a lot of circumstances.

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up: 2 tables; lots of INSERTs; no UPDATEs; some DELETEs, once a day at most; some user-generated SELECTs with JOINs; a huge data set. What would an optimal server configuration (software and hardware; I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that keeps SELECT query times reasonably low. I can provide more information about the current setup (like tables, indexes, ...) if needed.
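
    To make the question more concrete, here is the kind of postgresql.conf starting point under consideration; the values are illustrative assumptions for a machine with roughly 16 GB of RAM and a PostgreSQL release of this era (checkpoint_segments predates 9.5), not measured recommendations:
      shared_buffers = 4GB              # common rule of thumb: around 25% of RAM
      effective_cache_size = 12GB       # what the planner assumes the OS page cache can hold
      work_mem = 32MB                   # per-sort/hash memory for the JOIN-heavy SELECTs
      maintenance_work_mem = 512MB      # speeds up index builds and vacuum after the daily DELETE
      checkpoint_segments = 32          # spreads checkpoint I/O under constant INSERTs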

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes to continue

    - by sw0x2A
    Our webservers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the webserver processes (doing the INSERT DELAYED) end up waiting for the database, and in some cases the MaxClients limit is reached in Apache, so it stops serving requests. We use INSERT DELAYED because "The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete." (Quote from the MySQL documentation.) I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it. (Since this is logging data, I do not care if we lose some entries.) Even when the table is locked, the PHP script should just go on and should not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, but we are thinking about moving this data to some NoSQL database. For now, though, I would like to know why INSERT DELAYED is not working as expected.)
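
    A hedged diagnostic for the next time it happens (connection options omitted): check whether the delayed-insert handler thread itself is blocked and whether rows are actually queuing, which might show where the wait comes from:
      mysql -e "SHOW STATUS LIKE 'Delayed%'"     # Delayed_insert_threads, Delayed_writes, Not_flushed_delayed_rows
      mysql -e "SHOW FULL PROCESSLIST"           # look for the delayed-insert handler thread and its state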

    Read the article

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace Cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows making decent backups. As more apps are added, it should not be necessary to change the backup script much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
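
    The file-backed "physical volume" idea, sketched with hedged, illustrative sizes and names (and the caveat that loop-backed LVM adds overhead and one more layer that can fail):
      dd if=/dev/zero of=/lvm-backing-file bs=1M count=20480    # 20 GB backing file
      losetup /dev/loop0 /lvm-backing-file                      # expose the file as a block device
      pvcreate /dev/loop0
      vgcreate vgdata /dev/loop0
      lvcreate -n srv -L 15G vgdata                             # leave free extents in the VG for snapshots
      mkfs.ext4 /dev/vgdata/srv
      mount /dev/vgdata/srv /srv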

    Read the article

  • How to calculate unweighted averages in Excel PivotTable?

    - by yonatron
    I often make PivotTables in which each row contains a number of per-person average measures. I then want to look at the unweighted column average for each measure, and usually make some kind of chart from these. Because my individual cells are often averaged from different numbers of data points, the Grand Total row ends up being a weighted average, which I'm not interested in. So I usually make my own average row a few rows above the table to use for my charts. That's not too much work, but there's another problem. I often add a few more people's worth of data to the PivotTables' source, then refresh the tables. This means my average row needs to be updated to encompass more rows from the PivotTable. Not a huge deal with one table, but when I have lots of them across lots of sheets, I have to do find/replace on a whole bunch of formulas. So: is there a way to automatically get unweighted column averages in a PivotTable, such that when the table is refreshed, the averages don't change locations and do encompass the newly added (or removed) data? Thanks

    Read the article

  • Apache using 100% CPU, once again

    - by CBenni
    Recently, apache2 started using 100% of CPU power; top confirms this. From other, similar threads, I took the tip to use mod_status. Aside from HUGE amounts of NULL requests, it gives:
      CPU Usage: u2.16 s1.32 cu0 cs0 - .0835% CPU load
      1.2 requests/sec - 17.6 kB/second - 14.6 kB/request
      8 requests currently being processed, 42 idle workers
    The access and error logs do not show anything surprising or intriguing at all. Note the sub-1% CPU usage reported there. Another tip was to use strace:
      root@server:~# strace -p 1956
      Process 1956 attached - interrupt to quit
      restart_syscall(<... resuming interrupted call ...>
    It remains like this for at least half an hour, without producing any additional output. Restarting apache fixed the problem for less than a second. The server runs a few custom Python scripts as well as a Django-powered website on apache2 (up to date), but even turning the scripts off (or not having them active in the first place) did not change anything. After I stopped apache and powered my server off, powered it on a few minutes afterwards and restarted all my services, the CPU usage remained low for several hours, just to pop up again randomly (?). The DigitalOcean CPU stats on my server show how the CPU usage was super high for almost half a day until I restarted the bot, just to remain stable for several hours and then pop up again. I am completely at a loss and don't know what I could do to find out what piece of my code is giving me these problems, or if apache itself is the cause... Therefore I would greatly appreciate any hints on these questions: What else can I try? Which things might I not have checked? Is this definitely in my own code? How do you find what part of Python code is stuck in an infinite loop or similar?
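
    One hedged debugging sketch that goes a step further than the single-PID strace above (PIDs are placeholders; -f follows the worker threads a plain strace would miss):
      ps -C apache2 -o pid,%cpu,stat,start,cmd --sort=-%cpu | head    # which child is actually burning CPU?
      sudo strace -f -tt -p 4321 2>&1 | head -n 100                   # follow threads and timestamp each syscall
      sudo lsof -p 4321 | tail -n 40                                  # open files/sockets hint at the stuck request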

    Read the article

  • Mail-Merge on Steroids: Can Word 2003 do this?

    - by richardtallent
    I have a huge report to put together, made up of over 1,000 smaller, nearly-identical reports. Each report includes: General 1:1 information (basic mail-merge stuff). Lots of text, some of which may need to be disabled or have alternate text based on a boolean field. A few embedded images, preferably loaded via HTTP URL, but if they have to be on a file system I can do that. (Filenames will be provided as a field in the data source.) Fortunately, all images are roughly the same size/shape. Several 1:m tables with a few fields apiece. The kicker is the master/child tables. I've seen examples for Word 2000 for doing this by left-joining the master and child table and using some IF/THEN logic to know whether to jump to the next master record. But in my case, I have several of these subtables, so that approach won't really work. So, can Word 2003 handle arbitrary master/child tables? If so, how? If not, I considered InfoPath, but I haven't used it before, and it seems to be made for data entry, not long formatted reports. I'm a software developer, so I could always hack something together with a massive VBA macro, or generate the report in HTML on the web server (where the data is coming from anyway). But I'm hoping Word will work without such gymnastics, since it will give the ultimate users of the report template better control over formatting and making minor changes.

    Read the article

  • Finding bluetooth link key in Win7, to double pair a device on dualboot computer

    - by Ilari Kajaste
    How can I dig up the Bluetooth link key for a paired device in Win7? Is this something that is dependent on the Bluetooth stack I'm using (Toshiba), or is there a generic place to store these in Win7? Note: I'm not talking about the six-digit code usually typed by the user during pairing - that is worthless since it's discarded after the pairing process. What I mean is the 128-bit link key that the devices exchange during pairing and use thereafter to encrypt all their Bluetooth traffic. Background: I dual-boot Win7 / Ubuntu on my laptop, and I would like to have my phone paired to both OSes. Since the dual-booting computer has only one Bluetooth adapter and thus only one Bluetooth address, I cannot do two pairings to the phone, since on the second pairing (Windows) the phone just replaces the previous pairing (Linux) to the same Bluetooth address. A thread on the Ubuntu forums pointed me to what I have to do: pair first on Linux, then on Windows, and then replace the link key on the Linux side with the one Windows negotiated. I can find the Linux-side pairing key in /var/lib/bluetooth/[BD_ADDR]/linkkeys - no problems there. However, on the Windows side I can't find the key. According to the forum post, on the Windows side the key should be in SYSTEM\ControlSet002\services\BTHPORT\Parameters\Keys\[BD_ADDR], but while that registry key does exist, it has no subkeys. (And a similar registry path in ControlSet001 didn't have any subkeys either.) One thing I've been instructed to do is to capture all events during pairing with Sysinternals Process Monitor. I did this, but I haven't been able to find any useful information in the captured events, not even by exporting the data to a huge XML and grepping that for the BD_ADDRs (with or without colons). So how could I find the link key for a paired device in Win7? Some reference information: Wikipedia: Bluetooth, Security Now: Bluetooth security

    Read the article

  • Passenger and Apache memory usage

    - by Brent Faulkner
    On a "CentOS release 6.2 (Final)" server (with Ruby 1.9.3 and Rails 3.2), and using more memory than expected. Looking at passenger-memory-stats I see a couple of HUGE httpd processes... any thoughts on how I can figure out what's going on and reduce the memory usage? Stats are included here... thanks! ---------- Apache processes ----------- PID PPID VMSize Private Name --------------------------------------- 1371 1 202.1 MB 0.1 MB /usr/sbin/httpd 4573 1371 210.2 MB 5.0 MB /usr/sbin/httpd 4778 1371 202.5 MB 0.6 MB /usr/sbin/httpd 4780 1371 217.6 MB 9.4 MB /usr/sbin/httpd 4781 1371 217.1 MB 9.1 MB /usr/sbin/httpd 4856 1371 202.4 MB 0.5 MB /usr/sbin/httpd 4863 1371 204.1 MB 2.1 MB /usr/sbin/httpd 5027 1371 202.4 MB 0.5 MB /usr/sbin/httpd 5043 1371 202.4 MB 0.4 MB /usr/sbin/httpd 5044 1371 205.5 MB 2.7 MB /usr/sbin/httpd 5072 1371 202.4 MB 0.5 MB /usr/sbin/httpd 5084 1371 202.4 MB 0.5 MB /usr/sbin/httpd 32111 1371 1297.0 MB 246.5 MB /usr/sbin/httpd 32579 1371 1914.3 MB 215.5 MB /usr/sbin/httpd ### Processes: 14 ### Total private dirty RSS: 493.42 MB -------- Nginx processes -------- ### Processes: 0 ### Total private dirty RSS: 0.00 MB ----- Passenger processes ----- PID VMSize Private Name ------------------------------- 4180 280.5 MB 24.4 MB Passenger ApplicationSpawner: /var/www/apps/people/current 4345 309.5 MB 53.4 MB Rack: /var/www/apps/people/current 4800 300.2 MB 55.2 MB Rack: /var/www/apps/people/current 4808 297.8 MB 52.5 MB Rack: /var/www/apps/people/current 4815 297.4 MB 52.4 MB Rack: /var/www/apps/people/current 4822 302.7 MB 55.6 MB Rack: /var/www/apps/people/current 22780 209.0 MB 0.0 MB PassengerWatchdog 22783 991.5 MB 1.3 MB PassengerHelperAgent 22785 113.4 MB 1.1 MB Passenger spawn server 22788 144.6 MB 0.0 MB PassengerLoggingAgent 22911 310.4 MB 64.0 MB Rack: /var/www/apps/people/current 22939 311.6 MB 53.5 MB Rack: /var/www/apps/people/current 26175 304.1 MB 55.8 MB Rack: /var/www/apps/people/current 26182 310.4 MB 44.0 MB Rack: /var/www/apps/people/current ### Processes: 14 ### Total private dirty RSS: 513.24 MB

    Read the article

  • Can Windows Media Player create playlists based on folder structure?

    - by Chaulky
    Over the years I've carefully molded my digital media collection into a series of folders that make it easy for me to find what I'm looking for. I recently discovered the awesomeness that is streaming video from Windows 7 Media Player to the PS3 so I can watch it on the big screen without all the hassle of hooking the computer up to the TV. The problem is, I totally lose my carefully crafted folder structure and all my videos become one giant mess again... not cool! As a temporary solution, I've created a few playlists for my favorites (Dexter Season 4, Dexter Season 5, Breaking Bad Season 1, etc.). This is a HUGE pain in the a$$. So, is there a way to get Windows Media Player (on Windows 7) to maintain some sort of folder structure based on the location of the actual video files? So if I have my videos sorted into folders by show and season, Media Player will pick that up and let me browse it in the same way. As an alternative answer, I'll accept suggestions for a program that can also stream to PS3 and has this "folder organization" feature.

    Read the article
