Search Results

Search found 743 results on 30 pages for 'karl brown'.


  • Create array of objects based on another array?

    - by xckpd7
    I want to take an array like this:

        var food = [
          { name: 'strawberry', type: 'fruit', color: 'red', id: 3483 },
          { name: 'apple', type: 'fruit', color: 'red', id: 3418 },
          { name: 'banana', type: 'fruit', color: 'yellow', id: 3458 },
          { name: 'brocolli', type: 'vegetable', color: 'green', id: 1458 },
          { name: 'steak', type: 'meat', color: 'brown', id: 2458 }
        ];

    and create something like this dynamically:

        var foodCategories = [
          { name: 'fruit', items: [
            { name: 'apple', type: 'fruit', color: 'red', id: 3418 },
            { name: 'banana', type: 'fruit', color: 'yellow', id: 3458 }
          ] },
          { name: 'vegetable', items: [
            { name: 'brocolli', type: 'vegetable', color: 'green', id: 1458 }
          ] },
          { name: 'meat', items: [
            { name: 'steak', type: 'meat', color: 'brown', id: 2458 }
          ] }
        ];

    What's the best way to go about doing this?
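
    A minimal sketch of one common approach (the groupByType name is my own, not from the question): build a lookup object keyed on type, then flatten it into the target shape.

        // Group the flat array into { name, items } buckets keyed on `type`.
        function groupByType(list) {
          var buckets = {};
          list.forEach(function (item) {
            if (!buckets[item.type]) buckets[item.type] = [];
            buckets[item.type].push(item);
          });
          // Convert the lookup object into the desired array-of-objects shape.
          return Object.keys(buckets).map(function (type) {
            return { name: type, items: buckets[type] };
          });
        }

        var foodCategories = groupByType(food);

    Run against the question's food array, this also files the strawberry entry under fruit, which the sample output appears to have omitted by accident.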


  • Excel: change 4 rows / 48 columns to 48 rows / 4 columns

    - by GoodOlPete
    Hi, I've selected 4 database records of 48 fields into Excel, like this:

        FirstName  LastName  Age  Address1       ...
        andy       smith     23   53 high st
        billy      ball      43   23 the avenue
        charles    brown     76   rose cottage
        dave       green     43   station rd

    I want to display them transposed, as:

        firstname  andy   billy  charles  dave
        lastname   smith  ball   brown    green
        age        23     43     76       43
        address1   ...

    Can anyone suggest how to do this?
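
    One built-in route, assuming the source block sits in A1:AV4 (adjust the range to your sheet): copy it and use Paste Special > Transpose, or select an empty 48-row by 4-column destination and enter the TRANSPOSE array formula with Ctrl+Shift+Enter:

        =TRANSPOSE(A1:AV4)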


  • How to search for an alphanumeric word before or after a keyword in Perl?

    - by aliocee
    I have sentences as shown in the examples below:

        $sen1 = "The quick brown fox jump KEYWORD over123 the3 lazy dog, fox is quick";
        $sen2 = "The quick brown fox jump123 KEYWORD over the lazy dog, fox is quick";

    I want to use 'KEYWORD' as my search string and extract the alphanumeric words before and after it, using a Perl regular expression. Sample output:

        over123
        jump123

    NB: The word 'the3' is left out because I'm only searching for alphanumeric words exactly before or after 'KEYWORD'. Thanks
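
    A minimal sketch of one way to do it: capture the word on each side of KEYWORD, then keep only the neighbours that mix letters and digits (the letters-then-digits pattern is an assumption based on the sample output):

        for my $sen ($sen1, $sen2) {
            if ($sen =~ /(\S+)\s+KEYWORD\s+(\S+)/) {
                # Keep only neighbours that contain letters followed by digits.
                for my $word ($1, $2) {
                    print "$word\n" if $word =~ /^[A-Za-z]+\d+$/;
                }
            }
        }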


  • Advanced SQL query with lots of joins

    - by lund.mikkel
    Hey fellow programmers. Okay, first let me say that this is a hard one. I know the presentation may be a little long, but I hope you'll bear with me and help me through anyway :D

    I'm developing an advanced search for bicycles. I've got a lot of tables I need to join to find all, let's say, red and brown bikes. One bike may come in more than one color! I've made this query for now:

        SELECT DISTINCT
            p.products_id,                    # simple product id
            products_name,                    # product name
            products_attributes_id,           # color id
            pov.products_options_values_name  # color name
        FROM products p
        LEFT JOIN products_description pd
            ON p.products_id = pd.products_id
        INNER JOIN products_attributes pa
            ON pa.products_id = p.products_id
        LEFT JOIN products_options_values pov
            ON pov.products_options_values_id = pa.options_values_id
        LEFT JOIN products_options_search pos
            ON pov.products_options_values_id = pos.products_options_values_id
        WHERE pos.products_options_search_id = 4  # code for red
           OR pos.products_options_search_id = 5  # code for brown

    My first concern is the many joins. The Products table mainly holds the product id and its image, and the Products Description table holds more descriptive info such as the name (and product id, of course). I then have the Products Options Values table, which holds all the colors and their IDs. Products Options Search contains the color IDs along with a color group ID (products_options_search_id). Red has the color group code 4 (brown is 5). The products and colors have a many-to-many relationship, managed in Products Attributes.

    So my questions are: First of all, is it okay to make so many joins? Is it hurting performance? Second: if a bike comes in both red and brown, it shows up twice even though I use SELECT DISTINCT; I think this is because of the INNER JOIN. Is this possible to avoid, or do I have to remove the duplicates in my PHP code? Third: bikes can be double-colored (i.e. black and blue), which means there are two rows for that bike: one where the color is black and one where it's blue (see the second question). But if I replace the OR in the WHERE clause with AND, it removes both rows, because neither row fulfills both conditions; only the product as a whole does. What is the workaround for that?

    I really hope you will and can help me. I'm a little desperate right now :D Regards, Mikkel Lund
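
    For the third question, a minimal sketch of the usual GROUP BY / HAVING workaround, assuming the same schema: filter on both color groups, then demand that each product matched both. As a side effect it also collapses the duplicate red/brown rows from the second question into one row per product:

        SELECT p.products_id, pd.products_name
        FROM products p
        INNER JOIN products_description pd ON pd.products_id = p.products_id
        INNER JOIN products_attributes pa ON pa.products_id = p.products_id
        INNER JOIN products_options_search pos
            ON pos.products_options_values_id = pa.options_values_id
        WHERE pos.products_options_search_id IN (4, 5)  # red, brown
        GROUP BY p.products_id, pd.products_name
        HAVING COUNT(DISTINCT pos.products_options_search_id) = 2  # must match both

    Dropping the HAVING line turns it back into an OR search, still with one row per product.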


  • How to store array data in a MySQL database using PHP?

    - by Cyn
    I'm new to PHP and MySQL, and I'm trying to learn how to store the data from the following three arrays: friend[], hair_type[] and hair_color[]. An example using MySQL and PHP would be nice. Thanks. Here is the HTML code (this input and the two selects are repeated verbatim for a second friend):

        <input type="text" name="friend[]" id="friend[]" />
        <select id="hair_type[]" name="hair_type[]">
          <option value="Hair Type" selected="selected">Hair Type</option>
          <option value="Straight">Straight</option>
          <option value="Curly">Curly</option>
          <option value="Wavey">Wavey</option>
          <option value="Bald">Bald</option>
        </select>
        <select id="hair_color[]" name="hair_color[]">
          <option value="Hair Color" selected="selected">Hair Color</option>
          <option value="Brown">Brown</option>
          <option value="Black">Black</option>
          <option value="Red">Red</option>
          <option value="Blonde">Blonde</option>
        </select>

    Here are the MySQL tables:

        CREATE TABLE friends_hair (
          id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
          hair_id INT UNSIGNED NOT NULL,
          user_id INT UNSIGNED NOT NULL,
          PRIMARY KEY (id)
        );

        CREATE TABLE hair_types (
          id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
          friend     TEXT NOT NULL,
          hair_type  TEXT NOT NULL,
          hair_color TEXT NOT NULL,
          PRIMARY KEY (id)
        );
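
    A minimal sketch of the insert side, assuming the form posts to this script, the mysqli extension, and placeholder connection details; the three parallel arrays line up by index, so one loop covers a whole row per friend:

        <?php
        // Placeholder credentials; replace with your own.
        $db = new mysqli('localhost', 'user', 'password', 'mydb');

        $friends = $_POST['friend'];
        $types   = $_POST['hair_type'];
        $colors  = $_POST['hair_color'];

        // One prepared statement, executed once per submitted friend.
        $stmt = $db->prepare(
            'INSERT INTO hair_types (friend, hair_type, hair_color) VALUES (?, ?, ?)'
        );
        foreach ($friends as $i => $friend) {
            $stmt->bind_param('sss', $friend, $types[$i], $colors[$i]);
            $stmt->execute();
        }
        $stmt->close();
        ?>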


  • (Python) Converting a dictionary to a list?

    - by Daria Egelhoff
    So I have this dictionary:

        ScoreDict = {"Blue": {'R1': 89, 'R2': 80},
                     "Brown": {'R1': 61, 'R2': 77},
                     "Purple": {'R1': 60, 'R2': 98},
                     "Green": {'R1': 74, 'R2': 91},
                     "Red": {'R1': 87, 'R2': 74}}

    Is there any way I can convert this dictionary into a list like this?

        ScoreList = [['Blue', 89, 80],
                     ['Brown', 61, 77],
                     ['Purple', 60, 98],
                     ['Green', 74, 91],
                     ['Red', 87, 74]]

    I'm not too familiar with dictionaries, so I really need some help here. Thanks in advance!
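
    A minimal sketch of one way, assuming every inner dict carries the same 'R1' and 'R2' keys:

        # Each entry becomes [name, R1 score, R2 score].
        ScoreList = [[name, scores['R1'], scores['R2']]
                     for name, scores in ScoreDict.items()]

    Note that a plain dict does not guarantee ordering (before Python 3.7), so sort the result if the order matters.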


  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles about why Shrinking Database is not good:

        SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server
        SQL SERVER – What the Business Says Is Not What the Business Wants

    I received many comments on why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:

    - Create a test database
    - Create two tables and populate them with data
    - Check the size of both tables; the size of the database is very low
    - Check the fragmentation of one table; fragmentation will be very low
    - TRUNCATE the other table
    - Check the size of the table and the fragmentation of the one table; fragmentation will be very low
    - SHRINK the database
    - Check the size of the table and the fragmentation of the one table; fragmentation will be very HIGH
    - REBUILD the index on one table
    - Check the size of the table; the size of the database is very HIGH
    - Check the fragmentation of the one table; fragmentation will be very low

    Here is the script for the same:

        USE MASTER
        GO
        CREATE DATABASE ShrinkIsBed
        GO
        USE ShrinkIsBed
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Create FirstTable
        CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100),
            LastName VARCHAR(100), City VARCHAR(100))
        GO
        -- Create Clustered Index on ID
        CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ([ID] ASC) ON [PRIMARY]
        GO
        -- Create SecondTable
        CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100),
            LastName VARCHAR(100), City VARCHAR(100))
        GO
        -- Create Clustered Index on ID
        CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ([ID] ASC) ON [PRIMARY]
        GO
        -- Insert One Hundred Thousand Records
        INSERT INTO FirstTable (ID, FirstName, LastName, City)
        SELECT TOP 100000
            ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1
                 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Insert One Hundred Thousand Records
        INSERT INTO SecondTable (ID, FirstName, LastName, City)
        SELECT TOP 100000
            ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1
                 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO

    Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation.
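
    The TRUNCATE listing is not reproduced on this page; based on the steps above it would look something like this sketch (FirstTable is the presumed target, since fragmentation is being tracked on SecondTable):

        -- TRUNCATE the other table
        TRUNCATE TABLE FirstTable
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO
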
    You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the shrinking operation, we were able to reduce the size of the database, but if you notice the fragmentation, it is considerably high. The major problem with the shrink operation is that it increases the fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size.

        -- Rebuild Index on SecondTable
        ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO

    You can notice that after rebuilding, fragmentation drops to a very low value (almost the same as the original); however, the database size grows way beyond the original. Before rebuilding, the size of the database was 5 MB; after rebuilding, it is around 20 MB. A regular index rebuild happens in the same user database where the index is placed, which usually increases the size of that database. Look at the irony of shrinking the database.
    One person shrinks the database to gain space (thinking it will help performance), which increases fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which grows the database way beyond its original size (before shrinking). So by shrinking, one usually does not gain what one was looking for, and rebuilding the index is not the best suggestion either, as it will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and interesting conversation. Let us run the following script, where we shrink the database and then REORGANIZE.

        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO
        -- Shrink the Database
        DBCC SHRINKDATABASE (ShrinkIsBed);
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO
        -- Reorganize Index on SecondTable
        ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO

    You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation from the demo. Read the blog post of Paul Randal. The following script will clean up the database:

        -- Clean up
        USE MASTER
        GO
        ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        GO
        DROP DATABASE ShrinkIsBed
        GO

    There are a few valid cases for shrinking a database as well, but they are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild the index in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology


  • Java applet needs permission but doesn't ask for it!

    - by Karl Jóhann
    I'm trying to establish a VPN connection (on Mac OS X 10.6.6) through a Check Point Java applet. The first time it ran, I chose NOT to give it access to my files and such, and now every time I try to launch the applet it tells me to "Please confirm the use of this Java applet and then refresh or reopen the window." But I don't know how to confirm it or delete the applet. How can I change the permissions afterwards, and where can I find the Java applets installed on my computer? Update: This turns out to be a problem in Firefox. I cleared cookies, the Java cache and the certificate in Safari, and it seems to work.


  • Which performance counters matter most for Windows server performance?

    - by Karl Cassar
    We have a website which sometimes performs slowly and/or hangs completely. I have temporarily set up the default system performance data collector in Performance Monitor, to see if this can shed some light. However, the default Data Collector Set collects a huge number of counters and generates huge log files: just 8 hours of data resulted in 4 GB. Which performance counters matter the most when judging server load? Also, is it a performance concern to leave such data collectors running indefinitely? Obviously, I will not know in advance when the server will experience slow performance, so I need the logs there so that I can check them afterwards. Any other specific guidelines on monitoring server performance would be greatly appreciated. The OS is Windows Server 2008 R2 (Web Edition).
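
    As a rough starting point (my own pick, not an official list), the broad health counters are CPU, memory, disk queue and network throughput; a typeperf one-liner like this samples them every 15 seconds into a CSV without the bulk of a full Data Collector Set:

        typeperf "\Processor(_Total)\% Processor Time" ^
                 "\Memory\Available MBytes" ^
                 "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
                 "\Network Interface(*)\Bytes Total/sec" ^
                 -si 15 -o perf.csv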


  • core temperature vs CPU temperature

    - by Karl Nicoll
    I have recently installed a new heat sink and fan combination on my Core 2 Quad, since my CPU was hitting about 70C under load. This has managed to reduce temperatures while running Prime95 to about 54C, which I'm taking as a win (this is ~30 minutes after fitting). I'm a little confused, though. The temperature readings given above are CORE temperatures, but HWMonitor is showing a fifth "CPU" temperature (the other 4 being the individual core temps) of 21C at idle, when idle temperatures for the cores vary between 37C and 42C. I guess there are two questions here: Are my CPU/core temperatures decent, and is it safe to overclock when these are the stock-clock temperatures? I gather that the maximum safe operating temperature for a C2Q is ~70C, so which reading should I measure against: the core temperatures (which are higher), or the CPU temperature?


  • How do I install ant on OS X Mavericks?

    - by Robert Karl
    After upgrading to OS X 10.9 Mavericks, ant is no longer on my path:

        [126] 11:23:26 rkarl-mba-4:~/mobile-baselayer > ant
        zsh: permission denied: ant
        [126] 11:23:50 rkarl-mba-4:~/mobile-baselayer > which ant
        ant not found

    I tried installing through Homebrew:

        [126] 11:23:09 rkarl-mba-4:~/mobile-baselayer > brew install ant
        Error: No available formula for ant

    It's odd that Homebrew doesn't have a formula for that. After googling, I found this article, which suggested using a user's custom formula for brew:

        [1] 11:23:56 rkarl-mba-4:~/mobile-baselayer > brew install https://raw.github.com/adamv/homebrew-alt/master/duplicates/ant.rb
        curl: (22) The requested URL returned error: 404 Not Found
        Error: Failure while executing: /usr/bin/curl -f#LA Homebrew\ 0.9.4\ (Ruby\ 1.8.7-358;\ Mac\ OS\ X\ 10.9) https://raw.github.com/adamv/homebrew-alt/master/duplicates/ant.rb -o /Library/Caches/Homebrew/Formula/ant.rb

    Any help would be appreciated!
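
    For what it's worth, an ant formula did land in Homebrew's core repository later on, so on an up-to-date installation the plain route is worth retrying (assuming brew itself is healthy):

        brew update          # refresh the formula list first
        brew install ant
        ant -version         # confirm it is back on the PATH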


  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients hang and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes stale and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration, not the client configuration. I see this message on the server:

        kernel: NFSD: starting 90-second grace period

    If I restart the NFS client on the client nodes after the migration (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers are still interrupted. Three questions: What is the 90-second grace period, and what is it there for? How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node? Is it actually possible to migrate the NFS server without having large file uploads drop?
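
    On the first question: the grace period is the window after startup or failover in which the server only accepts lock and state reclaims from previous clients, which is why new I/O stalls. On Linux servers it usually tracks the NFSv4 lease time and can often be shortened through the nfsd proc interface; a sketch, assuming your kernel exposes these files (set before nfsd starts, and skip whichever is absent):

        # On the NFS server, before nfsd starts:
        echo 10 > /proc/fs/nfsd/nfsv4leasetime
        echo 10 > /proc/fs/nfsd/nfsv4gracetime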


  • Issue with Ivan Heckman's allSnap

    - by karl
    For the longest time I have used Ivan Heckman's allSnap program to better manage windows on my PC by making them snap together easily instead of overlapping. However, on Windows 8 I cannot seem to get this to work. I suspect it has something to do with how Windows 8 borders seem to have a band of transparent pixels around the outside of the window border, but overall I would love to get the snapping functionality back if it is at all possible. It's very hard to find information about this online, as all I find are posts about snapping Metro apps to the side of the screen in Desktop Mode.


  • Connection Issue

    - by Karl Schneider
    Desktop computer, connected directly to a Comcast modem. Every so often, at seemingly random intervals, my connection will drop. This could be in the middle of browsing, or when I'm not even at the computer. When the connection drops, the modem still shows 4 green lights. The modem is connected to a splitter (cable and internet in the same room), and then directly to the wall. To recover from the problem I am forced to restart my computer, at which point everything works fine again. I have tried ipconfig /release and /renew; it tells me that it is unable to contact the DHCP server and thus can't renew. I have updated the NIC's driver, no luck. I have changed the Ethernet cord, no luck. I have had Comcast replace the modem, no luck. The only things I can think of that haven't been replaced are the cords connecting the modem to the wall and the splitter. Can anyone think of anything else I may be able to do to isolate what's causing the issue?


  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, to disk or memory. I already checked to make sure nothing is conflicting in the mods_enabled directory with mod_cache (ended up commenting it all out), and I also tried moving all of the caching-related fields to the configuration file for mod_cache. In addition I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config; any help would be greatly appreciated!!

        <VIRTUALHOST *:8080>
            ProxyRequests On
            ProxyVia On

            #ErrorLog "/var/log/apache2/proxy-error.log"
            #CustomLog "/var/log/apache2/proxy-access.log" common
            CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit
            CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss
            CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate
            CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate
            LogFormat "%{cache-status}e ..."

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /var/cache/apache2/mod_disk_cache
            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /
            #CacheHeader on
            CacheDirLevels 3
            CacheDirLength 5

            #<IfModule mod_mem_cache.c>
            #    CacheEnable mem /
            #    MCacheSize 4096
            #    MCacheMaxObjectCount 100
            #    MCacheMinObjectSize 1
            #    MCacheMaxObjectSize 2048
            #</IfModule>

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from x.x.x.x
                # IP above hidden for this post
                <filesMatch "\.(xml|txt|html|js|css)$">
                    ExpiresDefault A7200
                    Header append Cache-Control "proxy-revalidate"
                </filesMatch>
            </Proxy>
        </VIRTUALHOST>

    Thank you once again!
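
    One thing worth checking against the mod_cache documentation: when Apache runs as a forward proxy, CacheEnable is matched against the proxied URL, so the path form may never match and the protocol-prefix form is needed instead. A sketch for this vhost (2.2-era syntax, to match the MCache directives above):

        # Cache forward-proxied HTTP content.
        CacheEnable disk http://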


  • Google Chrome freezing when I open Bookmark Manager

    - by Karl Cassar
    I have an issue with Google Chrome freezing whenever I open the Bookmark Manager. Only that particular tab freezes, and I can still use the other tabs. No bookmarks appear, and I cannot type in the 'Search bookmarks' field. This seems to be related to my logged-in profile: if I switch to a different profile, it works. I've also tried logging in with my profile on different computers using Chrome, and the Bookmark Manager also freezes there. However, I can still add bookmarks from the bookmarks tab; I just cannot use the Bookmark Manager. Any ideas what I can do? Is it possible to somehow export my bookmarks, reset the profile's bookmarks (without losing other information like extensions etc.), and re-import them?
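
    One avenue, an assumption on my part rather than a documented fix: Chrome keeps bookmarks as a plain JSON file in the profile folder, so you can back it up, remove it, and let Chrome recreate an empty one (if Chrome Sync is on, the synced copy may simply come back). On Windows, roughly:

        rem Close Chrome first, then back up and reset the bookmark store.
        cd /d "%LOCALAPPDATA%\Google\Chrome\User Data\Default"
        copy Bookmarks "%USERPROFILE%\Desktop\Bookmarks.backup"
        del Bookmarks Bookmarks.bak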


  • Performance Monitor in IIS 7 to monitor which website is using the most resources (ASP.NET)

    - by Karl Cassar
    I am using Windows Server 2008 R2 and IIS 7.5, and am hosting multiple websites on the same web server. Is it possible to use Performance Monitor to find out which website is using the most resources on average? I've added a user-defined Data Collector Set in Performance Monitor, collecting data for one day. However, I could not find any details hinting at which website uses the most resources. Which counters are crucial for monitoring websites? The generated report tells me that the top process is w3wp##1; how can I know which website it corresponds to? I've also tried adding counters from ASP.NET Applications for all object instances, but % Managed Processor Time (estimated) is 0 at all times.
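
    To map a w3wp instance back to a site: appcmd lists each running worker process with its PID and application pool, and the Process object's "ID Process" counter ties that PID to the w3wp##1 instance in the report:

        %windir%\system32\inetsrv\appcmd list wp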


  • Finding a message in an archive in Kerio MailServer 6

    - by Karl Cassar
    I need to locate some emails from the archive. Kerio is set to archive emails on a daily basis, keeping the last 2 months. From the mail log, I found entries like:

        [09/Oct/2012 18:02:20] Recv: Queue-ID: 5074589c-00004ddb, Service: SMTP, From: <info@XXXXXXXXXXXX>, To: <Suzette@xxxxxxxxxxx>, Size: 699, Sender-Host: mail.XXXXXXXXXXX, User: automailer@XXXXXXXXXXXXXXX

    I need to locate this specific email. The archive folder has a lot of ZIP files like:

        2012-Oct-06
        2012-Oct-07
        2012-Oct-08
        2012-Oct-09
        ...

    I assumed this would be in the 2012-Oct-09 zip file. I extracted it, and the zip file contains a lot of emails in the /#msgs/ folder, named:

        0000000a.eml
        0000000b.eml
        0000000c.eml
        ...

    I did a search for the last part of the Queue-ID, 00004ddb, but it returned no results. I tried other random searches for other emails in the mail log, but I couldn't find a single one. Any idea how one goes about finding such an email in the archive?
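
    A sketch of the brute-force route, assuming the extracted archive sits on a machine with grep: the queue ID from the mail log is often not written into the stored message at all, so search on details you do know from the log entry instead, such as the addresses:

        # List archived messages mentioning the recipient from the log entry.
        grep -rli "Suzette@" "./#msgs/"

        # Narrow further by the sender seen in the same log line.
        grep -rli "automailer@" "./#msgs/"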


  • Can I choose a sparse file as vdev for a zfs pool?

    - by Karl Richter
    man zpool states that a vdev for a zfs pool can be a "regular file". Can I specify a sparse file (the warning about the integrity of the file being determined by the underlying filesystem should apply with the same relevance for a sparse file)? The ZFS administration guide on https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ states that file vdevs "must be preallocated, and not sparse files or thin provisioned" (thanks to @jlliagre). On https://wiki.archlinux.org/index.php/Experimenting_with_ZFS sparse files are used without any comment.
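
    Mechanically it does seem to work for experiments, whatever the integrity caveats; a throwaway sketch on Linux (the file path is just an example, and zpool wants an absolute path for file vdevs):

        truncate -s 1G /tmp/sparse-vdev.img    # sparse 1 GiB backing file
        zpool create testpool /tmp/sparse-vdev.img
        zpool status testpool
        zpool destroy testpool                 # tear down when done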


  • Why does Matlab running in screen on Linux terminate when I close the PuTTY session?

    - by Karl
    I connected to a Linux server with PuTTY, started a screen session, and started Matlab with:

        matlab -nodesktop

    Then I ran my Matlab code as usual; it will run for hours. To test whether screen works, I started another PuTTY session and ran top. Then I closed the PuTTY session with the still-running Matlab (top showed Matlab at 100% CPU usage) in screen. To my surprise, my Matlab process vanished after I closed that session. I've tried this a few times, and the same thing happens each time. screen -ls shows that my screens are there but detached, while top shows that my Matlab is not there. What might be the cause of this? Shouldn't screen normally keep things running even if I terminate my PuTTY session?
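
    For comparison, the detach-safe pattern that should survive a dropped connection looks like this (the session name is my choice); if Matlab still dies, something in the logout path is likely delivering SIGHUP to the session:

        screen -S matlab        # start a named session
        matlab -nodesktop       # run inside it
        # detach with Ctrl-A d before closing PuTTY, then later:
        screen -r matlab        # reattach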


  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a setup of HVM-enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project are that the operating system must be functionally identical after the migration, with minimal modification to the operating system (with the exception of kernel / drive mapping). I am also allowed to change the bootloader (i.e. grub) in whatever way I see fit. Here are the steps I took (CentOS 5.5 specific at the moment):

        yum install kernel-xen

    This installed kernel 2.6.18-194.32.1.el5xen. I then edited /boot/grub/menu.lst to match:

        title CentOS (2.6.18-194.32.1.el5xen)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
            initrd /initrd-2.6.18-194.32.1.el5xen.img

    Then I changed my XenServer parameters to match:

        xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img"
        xe vm-param-set uuid=[vm uuid] HVM-boot-policy=""
        xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub
        xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true

    Some things to note: I am running a VolGroup LVM ;) Anyway, after all these steps (which aren't much!) I boot the VM and it boots the initial kernel just fine; however, I am presented with this error:

        device-mapper: dm-raid45: initialized v0.2594l
        Waiting for driver initialization.
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Activating logical volumes
        Volume group "VolGroup00" not found
        Creating root device.
        Mounting root filesystem.
        mount: could not find filesystem '/dev/root'
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory

    Now my hunch is that it cannot detect / because of the fact that when you change from HVM mode to PV, something (not that obvious) happens: when you make an SR (storage) on an HVM guest, it is mounted to the guest OS as /dev/hda, but in PV mode it presents itself as /dev/xvda... Could this be the answer? And if so, how the heck do I implement it??

    Update: So I have gotten a bit further in my quest, as it now detects the LVMs. To do this, I needed to recompile the xen-kernel initrd image:

        mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen

    Now when I boot I get this:

        Loading dm-raid45.ko module
        device-mapper: dm-raid45: initialized v0.2594l
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Found volume group "VolGroup00" using metadata type lvm2
        Activating logical volumes
        3 logical volume(s) in volume group "VolGroup00" now active
        Creating root device.
        Mounting root filesystem.
        mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory
        Kernel panic - not syncing: Attempted to kill init!
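
    If the hda-to-xvda theory is right, the places to check are the guest's own device references; a sketch of what one might try from a rescue environment (purely a guess for this setup, so work on a copy of the VM):

        # Inside the guest's mounted root: point old IDE names at the PV ones.
        sed -i 's|/dev/hda|/dev/xvda|g' /etc/fstab /boot/grub/menu.lst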


  • VMware ESXi 4.1 storage errors with MD3200

    - by Karl Katzke
    We're seeing some storage errors in the ESXi logs relating to our MD3200. I'm sort of a VMware noob and am not sure where to go from here, because I couldn't find a lot of documentation on the VMware website and the forums didn't seem to have any posts about it with actual answers. Everything is working, but I'm trying to proactively troubleshoot this.

        sfcb-vmware_base|StoragePool Cannot get logical disk data from controller 0
        sfcb-vmware_base|Volume Cannot get logical disk data from controller 0
        sfcb-vmware_base|storelib-GetLDList-ProcessLibCommandCall failed; rval = 0x800E

    The ESXi boxes are connected directly via SAS to the controller on the MD3200. What do these errors actually mean, and what's a good path to start troubleshooting or solving them?


  • What is the public key file that is generated by PuTTY?

    - by Karl Nicoll
    If I'm using the PuTTY key generator to create a public/private key pair, there is a button to "Save public key". However, OpenSSH doesn't accept the format of this public key file, at least as far as I can tell. The generated public key looks like this:

        ---- BEGIN SSH2 PUBLIC KEY ----
        Comment: "rsa-key-20140607"
        AAAAB3NzaC1yc2EAAAABJQAAAQEAs+UjC01Fk8xs8vpLW1RIipwxG1zXTaCkIdeJ
        K3SyhMVl78/QwErTYuIop3wVmVAuTKhw4uYCMaRZCy36FdSGQ9FwDCP+lT36M2Xv
        ZtraweH+1IPHzRf2ENNdEfs286zllu96WGtqLYwObXQbHMm3dPDDbH3apynrS/FJ
        HisCayFXFN84aBfh9HFHrM++BXqpxTX5nq50QoRwSjMY6qMuLwjJKKQslcb5hlRV
        SjCmUZKv9/fH+i0BI7UHJ01XHNp1sisL5biWkakXD9BxXjv/ggyeLsOTtdtrF0DK
        7wYQXyNmpRqHYOBdrZlskHf/R1CtWoBi5IIeARWZVDduXf1Pww==
        ---- END SSH2 PUBLIC KEY ----

    (This is not an actual public key.) Where is this key format typically used? Does it work with OpenSSH at all?
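
    That is the RFC 4716 (SSH2) public key format. OpenSSH will not take it in authorized_keys as-is, but ssh-keygen converts it to the one-line OpenSSH form (the filename here is my example):

        ssh-keygen -i -f putty_key.pub >> ~/.ssh/authorized_keys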


  • How to use File History with Recovery partition?

    - by Karl
    I formatted the recovery partition right after installing Windows 8. I'm curious as to why File History only allows the use of an external HDD instead of the recovery partition; I can't find a way to use it, so I decided to use the partition exclusively for restore points. Is there any way to make the recovery partition usable exclusively for File History? Or should I use 3rd-party programs instead (EaseUS Todo Backup, Macrium Reflect, etc.)?

