Search Results

Search found 16061 results on 643 pages for 'array indexing'.

  • Drag N Drop utilizing simple cursor

    - by Cameron
    I'm using CommonsGuy's drag-and-drop example and am basically trying to integrate it with the Android Notepad example. Of the drag-and-drop examples I've seen, all use a static string array, whereas I'm getting a list from a database via a SimpleCursorAdapter. So my question is how to get the results from the SimpleCursorAdapter into a string array, but still have it return the row id when a list item is clicked, so I can pass that id to the activity that edits the note. Here is my code:

        Cursor notesCursor = mDbHelper.fetchAllNotes();
        startManagingCursor(notesCursor);

        // Create an array to specify the fields we want to display in the list (only NAME)
        String[] from = new String[]{WeightsDatabase.KEY_NAME};
        // and an array of the fields we want to bind those fields to (in this case just text1)
        int[] to = new int[]{R.id.weightrows};

        // Now create a simple cursor adapter and set it to display
        SimpleCursorAdapter notes =
            new SimpleCursorAdapter(this, R.layout.weights_row, notesCursor, from, to);
        setListAdapter(notes);

    And here is the code I'm trying to work that into:

        public class TouchListViewDemo extends ListActivity {
            private static String[] items = {"lorem", "ipsum", "dolor", "sit", "amet",
                    "consectetuer", "adipiscing", "elit", "morbi", "vel", "ligula",
                    "vitae", "arcu", "aliquet", "mollis", "etiam", "vel", "erat",
                    "placerat", "ante", "porttitor", "sodales", "pellentesque",
                    "augue", "purus"};
            private IconicAdapter adapter = null;
            private ArrayList<String> array = new ArrayList<String>(Arrays.asList(items));

            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                setContentView(R.layout.main);
                adapter = new IconicAdapter();
                setListAdapter(adapter);

                TouchListView tlv = (TouchListView) getListView();
                tlv.setDropListener(onDrop);
                tlv.setRemoveListener(onRemove);
            }

            private TouchListView.DropListener onDrop = new TouchListView.DropListener() {
                @Override
                public void drop(int from, int to) {
                    String item = adapter.getItem(from);
                    adapter.remove(item);
                    adapter.insert(item, to);
                }
            };

            private TouchListView.RemoveListener onRemove = new TouchListView.RemoveListener() {
                @Override
                public void remove(int which) {
                    adapter.remove(adapter.getItem(which));
                }
            };

            class IconicAdapter extends ArrayAdapter<String> {
                IconicAdapter() {
                    super(TouchListViewDemo.this, R.layout.row2, array);
                }

                public View getView(int position, View convertView, ViewGroup parent) {
                    View row = convertView;
                    if (row == null) {
                        LayoutInflater inflater = getLayoutInflater();
                        row = inflater.inflate(R.layout.row2, parent, false);
                    }
                    TextView label = (TextView) row.findViewById(R.id.label);
                    label.setText(array.get(position));
                    return (row);
                }
            }
        }

    I know I'm asking for a lot, but a point in the right direction would help quite a bit! Thanks
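
    A minimal sketch of one way to bridge the two examples: walk the cursor once and build parallel lists, so the drag-and-drop list can stay a plain ArrayAdapter while clicks still map back to database row ids. This is an illustration meant to slot into the posted ListActivity, not the actual Notepad code; the "_id" column name is an assumption borrowed from the Notepad sample.

        // Hypothetical helper (sketch): copy cursor rows into parallel lists.
        private ArrayList<String> names = new ArrayList<String>();
        private ArrayList<Long> rowIds = new ArrayList<Long>();

        private void loadNotes() {
            Cursor c = mDbHelper.fetchAllNotes();
            startManagingCursor(c);
            int idCol = c.getColumnIndexOrThrow("_id"); // assumed row-id column
            int nameCol = c.getColumnIndexOrThrow(WeightsDatabase.KEY_NAME);
            while (c.moveToNext()) {
                names.add(c.getString(nameCol));   // display string for the adapter
                rowIds.add(c.getLong(idCol));      // row id for the edit activity
            }
        }

        // In onListItemClick: long rowId = rowIds.get(position);
        // then pass rowId to the edit activity via the Intent.

    Note that the drop and remove listeners would have to apply the same moves to rowIds that they apply to the string list, so positions stay aligned.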

  • OpenGL ES Polygon with Normals rendering (Note the 'ES!')

    - by MarqueIV
    Ok... imagine I have a relatively simple solid that has six distinct normals but close to 48 faces (8 faces per direction), with a LOT of shared vertices between faces. What's the most efficient way to render that in OpenGL? I know I can place the vertices in an array, then use an index array to render them, but I have to keep breaking my rendering down into steps to change the normals (i.e. set normal 1... render 8 faces... set normal 2... render 8 faces, etc.). Because of that I have to maintain an array of index arrays, one for each normal. Not good!

    The other way I can do it is to use separate normal and vertex arrays (or even interleave them), but that means I need a one-to-one ratio of normals to vertices, and that means the normals are duplicated 8 times more than they need to be. On something with a spherical or even curved surface every normal most likely is different, but for this it really seems like a waste of memory.

    In a perfect world I'd like my vertex and normal arrays to have different lengths, and then, when I go to draw my triangles or quads, specify a separate index into each array for that vertex. Now the OBJ file format lets you specify exactly that: a vertex array and a normal array of different lengths, and when you specify the face you are rendering, you give a vertex index and a normal index (as well as a UV coordinate if you are using textures too), which seems like the perfect solution! 48 vertices but only 8 normals, then pairs of indexes defining the shapes' faces. But I'm not sure how to render that in OpenGL ES (again, note the 'ES'). Currently I have to 'denormalize' (sorry for the SQL pun there) the normals back to one-to-one with the vertex array, then render. That just wastes memory to me. Anyone help? I hope I'm missing something very simple here. Mark
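
    Since OpenGL ES only supports a single index buffer shared by all vertex attributes, the usual workaround is to expand the OBJ-style (position index, normal index) pairs once at load time, storing each unique pair exactly once. A sketch of that welding step (the class and method names are made up for illustration):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch: collapse OBJ-style (positionIndex, normalIndex) pairs into one
        // GLES-friendly interleaved vertex stream plus a single index buffer.
        // Each unique pair is stored once, so normals are duplicated only where
        // a position genuinely needs a second normal.
        class VertexWelder {
            static class Mesh {
                float[] interleaved; // x,y,z,nx,ny,nz per welded vertex
                short[] indices;     // one index per face corner
            }

            static Mesh weld(float[] positions, float[] normals, int[][] facePairs) {
                Map<Long, Short> seen = new HashMap<Long, Short>();
                List<Float> stream = new ArrayList<Float>();
                short[] indices = new short[facePairs.length];
                short next = 0;
                for (int i = 0; i < facePairs.length; i++) {
                    int p = facePairs[i][0], n = facePairs[i][1];
                    long key = ((long) p << 32) | (n & 0xffffffffL); // pack the pair
                    Short idx = seen.get(key);
                    if (idx == null) {
                        idx = next++;
                        seen.put(key, idx);
                        for (int k = 0; k < 3; k++) stream.add(positions[3 * p + k]);
                        for (int k = 0; k < 3; k++) stream.add(normals[3 * n + k]);
                    }
                    indices[i] = idx;
                }
                Mesh m = new Mesh();
                m.interleaved = new float[stream.size()];
                for (int i = 0; i < m.interleaved.length; i++) m.interleaved[i] = stream.get(i);
                m.indices = indices;
                return m;
            }
        }

    The welded vertex count grows with the unique combinations actually used, not with the face count, so a faceted solid stays far smaller than a full per-corner duplication.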

  • How to find a specific key/value (property list)

    - by Bob Rivers
    Hi, I'm learning Cocoa/Objective-C. Right now I'm dealing with key-value coding. After reading Aaron's book and other sources, I thought I was ready to leave the simple examples behind and try a complex one... I'm trying to read the iTunes property list (iTunes Music Library.xml). I would like to retrieve the tracks held by a specific playlist. Probably everybody knows it, but below I put a piece of the XML:

        <plist version="1.0">
        <dict>
            <key>Major Version</key><integer>1</integer>
            ...
            <key>Playlists</key>
            <array>
                <dict>
                    <key>Name</key><string>Library</string>
                    ...
                    <key>Playlist Items</key>
                    <array>
                        <dict>
                            <key>Track ID</key><integer>10281</integer>
                        </dict>
                        ...
                    </array>
                </dict>
                <dict>
                    ...
                </dict>
            </array>
        </dict>
        </plist>

    As you can see, the playlists are stored as dictionaries inside an array, and the key that identifies each one is inside it, not in a <key> preceding it. The problem is that I'm not able to figure out how to search for a key that is inside another one. With the following code I can find the array in which the playlists are stored, but how do I find a specific <dict>?

        NSDictionary *rootDict = [[NSDictionary alloc] initWithContentsOfFile:file];
        NSArray *playlists = [rootDict objectForKey:@"Playlists"];

    Here at Stack Overflow I found this post, but I'm not sure if iterating over the array and testing each entry is a good idea. I'm quite sure I could use valueForKeyPath, but I'm unable to figure out how to do it. Any help is welcome. TIA, Bob
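
    For what it's worth, a linear scan over the playlists array is a perfectly reasonable approach for a structure this size. Below is a language-agnostic sketch of that loop, written in Java purely for illustration (the Cocoa version would do the same walk over NSArray/NSDictionary); the types and class name are assumptions:

        import java.util.List;
        import java.util.Map;

        // Sketch: find the playlist dictionary whose "Name" entry matches,
        // then return its "Playlist Items" array.
        class PlistLookup {
            @SuppressWarnings("unchecked")
            static List<Map<String, Object>> tracksFor(
                    List<Map<String, Object>> playlists, String name) {
                for (Map<String, Object> playlist : playlists) {
                    if (name.equals(playlist.get("Name"))) {
                        return (List<Map<String, Object>>) playlist.get("Playlist Items");
                    }
                }
                return null; // no playlist with that name
            }
        }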

  • making a queue program

    - by seventhief
    Hi, can someone help me with a queue program? I want the display to show array[0] as array[1], while the value is really stored at array[0]. I have the add function working, but I can't get the view and delete commands right: they should operate on, e.g., array[0] to array[4], but display them as array[1] to array[5] with the inserted values.

        #include <stdio.h>
        #include <stdlib.h>
        #include <conio.h>   /* non-standard; provides clrscr() */

        #define p printf
        #define s scanf

        int rear = 0;
        int front = 0;
        int *q_array = NULL;
        int size = 0;

        void add(int a);
        void delt(void);
        void view(void);

        int main(void)
        {
            int num, opt;
            char cont[2] = "y";

            clrscr();
            p("Queue Program\n\n");
            p("Queue size: ");
            s("%d", &size);
            p("\n");

            if (size > 0) {
                q_array = malloc(size * sizeof(int));
                if (q_array == NULL) {
                    p("ERROR: malloc() failed\n");
                    exit(2);
                }
            } else {
                p("ERROR: size should be positive integer\n");
                exit(1);
            }

            while ((cont[0] == 'y') || (cont[0] == 'Y')) {
                clrscr();
                p("Queue Program");
                p("\n\nQueue size: %d\n\n", size);
                p("MAIN MENU\n1. Add\n2. Delete\n3. View");
                p("\n\nYour choice: ");
                s("%d", &opt);
                p("\n");

                switch (opt) {
                case 1:
                    if (rear == size) {
                        p("You can't add more data");
                    } else {
                        p("Enter data for Queue[%d]: ", rear + 1);
                        s("%d", &num);
                        add(num);
                    }
                    break;
                case 2:
                    delt();
                    break;
                case 3:
                    view();
                    break;
                }

                p("\n\nDo you want to continue? (Y/N)");
                s(" %1s", cont);
            }
            return 0;
        }

        void add(int a)
        {
            q_array[rear] = a;
            rear++;
        }

        void delt(void)
        {
            if (front == rear) {
                p("Queue Empty");
            } else {
                /* display as 1-based; storage stays 0-based */
                p("Queue[%d] = %d removed.", front + 1, q_array[front]);
                front++;
            }
        }

        void view(void)
        {
            int i;
            /* rear is one past the last element, so use < rather than <= */
            for (i = front; i < rear; i++)
                p("\nQueue[%d] = %d", i + 1, q_array[i]);
        }

  • Getting specific data from database

    - by ifsession
    I have a table called Categorie with a few columns, and I'm trying to get only a few of them out of my database. So I've tried this:

        $sql = 'SELECT uppercat AS id, COUNT(uppercat) AS uppercat FROM categorie GROUP BY uppercat;';
        $d = Yii::app()->db->createCommand($sql)->query();

    But I find the output strange. I was trying to do an array_shift, but I get an error that this isn't an array. When I do a var_dump on $d:

        object(CDbDataReader)[38]
          private '_statement' =>
            object(PDOStatement)[37]
              public 'queryString' => string 'SELECT uppercat AS id, COUNT(uppercat) AS uppercat FROM categorie GROUP BY uppercat;' (length=100)
          private '_closed' => boolean false
          private '_row' => null
          private '_index' => int -1
          private '_e' (CComponent) => null
          private '_m' (CComponent) => null

    OK... then I did a foreach on $d:

        array
          'id' => string '0' (length=1)
          'uppercat' => string '6' (length=1)
        array
          'id' => string '3' (length=1)
          'uppercat' => string '2' (length=1)
        array
          'id' => string '6' (length=1)
          'uppercat' => string '1' (length=1)
        array
          'id' => string '7' (length=1)
          'uppercat' => string '2' (length=1)
        array
          'id' => string '9' (length=1)
          'uppercat' => string '2' (length=1)

    So why do I get the message that $d isn't an array, when it contains arrays? Is there any other way to get some specific data out of my database such that I can then do an array_shift on it? I've also tried doing this with findAllBySql, but then I can't reach my attribute for COUNT(uppercat), which is not in my model. I guess I'd have to add it to my model, but I wouldn't like that because I need it just once.

  • "Error in the Site Data Web Service." when performing crawl

    - by Janis Veinbergs
    Installed SharePoint Services v3 (SP2, October 2009 cumulative updates, language pack), attached to a content database I had previously (all works). Installed Search Server 2008 Express (with language pack) on top of WSS, and crawl does not work. However, it works for a newly created web application + database. I was playing around with accounts and permissions to try to get it working. Currently I have a WSS_Crawler account with these permissions:

    - Office Search Server runs with the WSS_Crawler account
    - Config database has read permissions for WSS_Crawler
    - Content database has read permissions for WSS_Crawler
    - WSS_Crawler is owner of the search database
    - Added WSS_Crawler to the SQL Server browser user group and administrators

    Yes, I've given more permissions than needed, but it doesn't even work with that, and I don't know if it's a permission problem or what. The crawl log says there is an Error in the Site Data Web Service., nothing more. There was a known issue with a similar error, Error in the Site Data Web Service. (Value does not fall within the expected range.), but this is not the case, as that's an old issue and I hope it has been included in SP2... Logs are from oldest to newest (descending order). They don't appear to be very helpful.

    Crawl log

        http://serveris                                      Crawled                               Local Office SharePoint Server sites   3/15/2010 9:39 AM
        sts3://serveris                                      Crawled                               Local Office SharePoint Server sites   3/15/2010 9:39 AM
        sts3://serveris/contentdbid={55180cfa-9d2d-46e4...   Crawled                               Local Office SharePoint Server sites   3/15/2010 9:39 AM
        http://serveris/test                                 Error in the Site Data Web Service.   Local Office SharePoint Server sites   3/15/2010 9:39 AM
        http://serveris                                      Error in the Site Data Web Service.   Local Office SharePoint Server sites   3/15/2010 9:39 AM

    EventLog

    No errors in the EventLog, just some Information events that Office Server Search provides:

        The search service started.
        Successfully stored the application configuration registry snapshot in the database. Context: Application 'SharedServices
        Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: Portal_Content. A master merge was started due to an external request.
        Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog Portal_Content.
        Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: AnchorProject. A master merge was started due to an external request.
        Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog AnchorProject.

    ULS Log

    Just some information, but no exceptions or unexpected errors:

        03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Insert crawl 771 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591
        03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Request Start Crawl 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875
        03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Advise status change 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:28.28 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 8wn6 Information A full crawl was started on 'Local Office SharePoint Server sites' by BALTICOVO\janis.veinbergs.
        03/15/2010 09:03:28.43 mssdmn.exe (0x1750) 0x10F8 ULS Logging Unified Logging Service 8wsv High ULS Init Completed (mssdmn.exe, Microsoft.Office.Server.Native.dll)
        03/15/2010 09:03:30.48 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0v Medium Create CCache
        03/15/2010 09:03:30.56 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0z Medium Create CUserCatalogCache
        03/15/2010 09:03:32.06 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 90ge Medium SQL: dbo.proc_MSS_PropagationGetQueryServers
        03/15/2010 09:03:32.09 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 7phq High GetProtocolConfigHelper failed in GetNotesInterface().
        03/15/2010 09:03:34.26 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:35.92 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:37.32 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:37.23 mssdmn.exe (0x1750) 0x1850 Search Server Common MS Search Indexing 8z14 Medium Test TRACE (NULL):(null), (NULL)(null), (CrLf): , end
        03/15/2010 09:03:39.04 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x0B24 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. error 2147755542, strWebUrl http://serveris
        03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505
        03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348
        03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=test/siteid={390611b2-55f3-4a99-8600-778727177a28}/weburl=/webid={fb0e4bff-65d5-4ded-98d5-fd099456962b}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243
        03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261
        03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x1260 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. error 2147755542, strWebUrl http://serveris/test
        03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505
        03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348
        03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=/siteid={505443fa-ef12-4f1e-a04b-d5450c939b78}/weburl=/webid={c5a4f8aa-9561-4527-9e1a-b3c23200f11c}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243
        03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261
        03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 24, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Remove crawl 771 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722
        03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Insert crawl 772 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591
        03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875
        03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project Portal_Content - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922
        03/15/2010 09:03:44.82 mssearch.exe (0x1B2C) 0x1DD0 Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:44.95 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:46.51 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:46.39 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.01 mssearch.exe (0x1B2C) 0x1C6C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.87 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.29 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.53 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.67 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.82 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.84 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:49.90 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:51.00 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Remove crawl 772 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722
        03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Insert crawl 773 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591
        03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875
        03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922
        03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Removed start crawl request from Queue 0, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2942
        03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875
        03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:56.71 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:56.78 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:58.40 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943
        03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
        03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Remove crawl 773 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722
        03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922

    What could be wrong here? Any clues?

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from the book Algorithms by Robert Sedgewick, and I came across a problem in mergesort that I am, sad to say, having difficulty solving. The problem is below:

    Sublinear extra space. Develop a merge implementation that reduces the extra space requirement to max(M, N/M), based on the following idea: divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth.

    My problem with the problem is that, following the idea Sedgewick recommends, the following set of blocks will not be sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following:

    1. Divide the full array into subarrays of size M.
    2. Run selection sort on each of the subarrays.
    3. Merge each of the subarrays using the method Sedgewick recommends in (ii). (This is where I encounter the problem of where to store the results after the merge.)

    This leads to wanting to increase the auxiliary space so it can hold at least two subarrays at a time (for merging), but based on the specifications of the problem that is not allowed. I have also considered using the original array as space for one subarray and the auxiliary space for the second subarray, but I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done?

    NOTE: If this is supposed to be on StackOverflow.com, please let me know how I can move it. I posted here because the question is academic.
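
    On the space question specifically: the merge in step (ii) can be done with only M extra slots by copying just the left block into an auxiliary buffer and merging back into the original array in place. A sketch of that merge primitive follows; it deliberately makes no claim about whether a single left-to-right pass of such merges resolves the counterexample above, which is the other half of the question.

        // Merge the sorted blocks a[lo..mid) and a[mid..hi) using an aux buffer
        // of length >= mid - lo. Only the left block is copied out, so the extra
        // space is M rather than N.
        class BlockMerge {
            static void mergeBlocks(int[] a, int lo, int mid, int hi, int[] aux) {
                int m = mid - lo;
                System.arraycopy(a, lo, aux, 0, m);   // stash the left block
                int i = 0, j = mid, k = lo;
                while (i < m && j < hi)
                    a[k++] = (aux[i] <= a[j]) ? aux[i++] : a[j++];
                while (i < m)
                    a[k++] = aux[i++];                // right-block leftovers are already in place
            }
        }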

  • Resolving the Access is Denied Error in VSeWSS Deployments

    - by Damon
    Visual Studio Extensions for Windows SharePoint Services 1.3 (VSeWSS 1.3) tends to make my life easier, except when I'm typing out the words that make up the VSeWSS acronym (really, what a mouthful). But one of the problems that I routinely encounter is error messages when trying to deploy solutions. These normally look something like the following:

        Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

    I tried a variety of steps to resolve this issue:

    - Recycling the application pool
    - Restarting IIS
    - Closing Visual Studio
    - Not detaching from the debugger until a request was fully completed
    - Logging off and logging back into Windows
    - etc.

    Nothing actually worked. Some of these attempts seemed to help keep the problem from happening quite as frequently, but I still have no idea what exactly causes the problem, and it would rear its ugly head from time to time. Unfortunately, the only resolution I found that seemed to work was to reboot the machine... which is a crappy resolution.

    Finally sick enough of the problem to spend some time on it, I went searching to figure out whether anyone else was having this issue. People seem to suggest that turning off the Indexing Service on your machine helps resolve the problem. I tried turning it off, but I kept having issues. Which was depressing. Fortunately, I stumbled upon the resolution while looking through the services list: if you encounter the issue, all you have to do is restart the World Wide Web Publishing Service. I've had a 100% success rate so far with this approach. I'm not sure if having the Indexing Service disabled is part of the solution, but I've kept it disabled for the time being because I'm really sick of having to reboot my machine to deal with that error message.

    If you do VSeWSS development, you may also want to check out this blog post: VSeWSS 1.3 - Getting around the "Unable to load one or more of the requested types" Error

  • JBOD with PERC H810

    - by primero
    I'm wondering if anybody has ever used Dell storage products like the MD3220 array in a JBOD configuration. From what I can tell, only the PERC H810 will work for external JBOD, but that is not terribly specific, and for some reason I couldn't find many examples on the web of people configuring Dell storage products as JBOD. My question is: is it possible to connect to an MD3220 array, or other Dell arrays, using a PERC H810 controller and use it as JBOD? And if so, do I have to configure every disk in the array as a RAID 0 volume?

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem and data? Will the partition be mountable and the data still there?

    Longer version: I used to have a raid6 array but decided to dismantle it. The disks from the array are now used as non-raid disks. The superblocks were cleared:

        sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks fails to be recognized when trying to mount it, or rather the single partition on it:

        sudo mount /dev/sdd1 /mnt/tmp
        mount: special device /dev/sdd1 does not exist

    fdisk claims there to be a partition on it:

        sudo fdisk -l /dev/sdd

        Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb06f6341

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      243201  1953512001   83  Linux

    Of course mount is right: the device /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm data still on the disk:

        sudo mdadm --examine /dev/sdd

        /dev/sdd:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : b164e513:c0584be1:3cc53326:48691084
                   Name : pringle:0  (local to host pringle)
          Creation Time : Sat Jun 16 21:37:14 2012
             Raid Level : raid6
           Raid Devices : 6

         Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
             Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
          Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
            Data Offset : 2048 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2

            Update Time : Sun Aug 12 22:20:39 2012
               Checksum : 4c329db0 - correct
                 Events : 1238535

                 Layout : left-symmetric
             Chunk Size : 512K

            Device Role : Active device 3
            Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.

  • Getting Query Parameters in Javascript

    - by PhubarBaz
    I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice javascript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call did previously.

        // Creates associative array (object) of query params
        var QueryParameters = (function() {
            var result = {};

            if (window.location.search)
            {
                // split up the query string and store in an associative array
                var params = window.location.search.slice(1).split("&");
                for (var i = 0; i < params.length; i++)
                {
                    var tmp = params[i].split("=");
                    result[tmp[0]] = unescape(tmp[1]);
                }
            }

            return result;
        }());

    Now all you have to do to get the query parameters is just reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it.

        var debug = (QueryParameters.debug === "true");

    or

        if (QueryParameters["debug"]) doSomeDebugging();

    or loop through all of the parameters:

        for (var param in QueryParameters)
            var value = QueryParameters[param];

    Hope you find this object useful.

  • SQL SERVER – Cleaning Up SQL Server Indexes – Defragmentation, Fillfactor – Video

    - by pinaldave
    Storing data non-contiguously on disk is known as fragmentation. Before learning to eliminate fragmentation, you should have a clear understanding of its types. When records are stored non-contiguously inside the page, it is called internal fragmentation; when the physical storage of pages and extents is not contiguous on disk, it is called external fragmentation. We can measure both types of fragmentation using the DMV sys.dm_db_index_physical_stats. Here is the generic advice for reducing fragmentation:

    - If avg_fragmentation_in_percent is > 5% and < 30%, use ALTER INDEX REORGANIZE. This statement is the replacement for DBCC INDEXDEFRAG and reorders the leaf-level pages of the index in logical order. As this is an online operation, the index is available while the statement is running.
    - If avg_fragmentation_in_percent is > 30%, use ALTER INDEX REBUILD. This is the replacement for DBCC DBREINDEX and rebuilds the index online or offline. In such cases, we can also use the drop-and-recreate-index method. (Ref: MSDN)

    Here is a quick video which covers many of the above mentioned topics. While Vinod and I were planning the indexing course, we had plenty of fun and learning. We often recorded a few of our statements and just left them aside; afterwards we thought they would be really funny. Here is a funny video shot by Vinod and myself on the same subject.

    Here is the link to the SQL Server Performance: Indexing Basics course. Here is additional reading material on the same subject:

    - SQL SERVER – Fragmentation – Detect Fragmentation and Eliminate Fragmentation
    - SQL SERVER – 2005 – Display Fragmentation Information of Data and Indexes of Database Table
    - SQL SERVER – De-fragmentation of Database at Operating System to Improve Performance

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

  • How is SU indexed so fast on Google?

    - by ekaj
    I just did a quick Google for a question that was 20 minutes old, to look for an answer, and it was already in Google's search results. How is this possible? I glanced over this article, which seems to suggest that SU has added RSS feeds (which SU has, but when I opened the feed the article said it was last posted 6 minutes ago, while when Googled it was 11 hours old). That leads me to think (based on that article; I don't know much about search indexing, but I am reading up on it at the moment) that most of this indexing is done thanks to the sitemap. Is there anything else I am unaware of that helps SU questions get onto Google so fast?

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine.

    In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g. can I buy two drives, go into the device manager and "RAID-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID, but it is just way too many trees and not enough forest to help me build a faster PC:

        RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple strip sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?
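
    Since the asker is a developer, a toy code sketch may say it faster than prose: striping is just modular arithmetic that fans consecutive blocks out across drives, which is why reads and writes can proceed on both drives at once, and also why losing either drive loses every other block of every file. This is purely an illustration, not how a real controller is implemented:

        // Sketch: RAID-0 striping as block arithmetic across two (or more) drives.
        class Raid0Sketch {
            static void locate(long logicalBlock, int driveCount) {
                long drive = logicalBlock % driveCount;        // which drive holds it
                long blockOnDrive = logicalBlock / driveCount; // where on that drive
                System.out.println("logical block " + logicalBlock
                        + " -> drive " + drive + ", block " + blockOnDrive);
            }

            public static void main(String[] args) {
                for (long b = 0; b < 8; b++) locate(b, 2); // a two-drive stripe
            }
        }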

  • SEO - Index images (lazyload)

    - by Guilherme Nascimento
    Note: my question is not about JavaScript. I'm developing a plugin for jQuery/MooTools/Prototype that works with the DOM. This plugin will improve page performance (better user experience) and will be distributed to other developers so that they can use it in their projects.

    How lazy loading works: the images are only loaded when you scroll down the page (it will look like this LazyLoad demo: http://www.appelsiini.net/projects/lazyload/enabled_timeout.html). But it does not need HTML5; I refer to this attribute: data-src="image.jpg". Two good examples of websites using lazy loading are youtube.com (suggested videos) and facebook.com (photo gallery).

    I believe the best alternative would be to use:

        <a href="image.jpg">Content for ALT=""</a>

    and convert it using JavaScript into:

        <img alt="Content for ALT=\"\"" src="image.jpg">

    Then you ask me: why do you want to do that anyway? I'll tell you: because HTML5 is not supported by every browser (especially mobile), and the data-src="image.jpg" attribute does not work at all with indexers. I need the HTML to be fully accessible to search engines; otherwise the plugin will not be something good for other developers.

    I thought about doing this to help with indexing:

        <noscript><img src="teste.jpg"></noscript>

    But noscript has a negative effect on indexing (I refer to the contents of noscript). I want a plugin that will not obstruct image indexing by search engines; it will be used by other developers (and me too). This is my question: how can I make HTML images accessible to search engines while minimizing requests?

  • What is the best practice for when to check if something needs to be done?

    - by changokun
    Let's say I have a function that does x. I pass it a variable, and if the variable is not null, it does some action. And I have an array of variables on which I'm going to run this function. Inside the function, it seems like good practice to check whether the argument is null before proceeding; a null argument is not an error, it just causes an early return. I could loop through the array and pass each value to the function, and the function will work great. Is there any value in also checking whether the variable is null in the loop, and only calling the function when it is not? That doubles up the null checking, but:

    - Is there any gained value?
    - Is there any gain in not calling a function?
    - Any readability gain on the loop in the parent code?

    For the sake of my question, let's assume the check will always be a null check. I can see how checking some object property might change over time, which would make the first (caller-side) check a bad idea. Pseudo-code example:

        for (thing in array) {
            x(thing)
        }

    Versus:

        for (thing in array) {
            if (thing not null)
                x(thing)
        }

    If there are language-specific concerns, I'm a web developer working in PHP and JavaScript.
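
    For concreteness, here is the same comparison as a small runnable sketch, written in Java rather than PHP/JavaScript purely to keep the example self-contained; the trade-off is the same in any language:

        import java.util.Arrays;
        import java.util.List;

        class NullCheckDemo {
            // Guard clause: a null argument is not an error, just an early return.
            static void x(String thing) {
                if (thing == null) return; // the function protects itself
                System.out.println("processing " + thing);
            }

            public static void main(String[] args) {
                List<String> things = Arrays.asList("a", null, "b");

                // Caller relies on the guard clause:
                for (String thing : things) x(thing);

                // Caller double-checks; worth it only if skipping the call is
                // meaningfully cheaper or makes the loop's intent clearer:
                for (String thing : things) {
                    if (thing != null) x(thing);
                }
            }
        }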

  • .Net search engine architecture and technology choice

    - by shrivb
    I am in the process of designing a search engine for an ASP.NET site. The site currently uses Microsoft Indexing Server to index and search content ranging from simple text files to MS Office documents to PDFs. MIS is also used to crawl file servers, and MIS in tandem with Index Server Companion crawls content from external sites. I intend to replace MIS with the indexer/crawler I am trying to build. Since my platform is completely on the Microsoft stack, I can't afford to have a Java application server; thus Solr, and effectively SolrNet, is ruled out. With this being the context, I have a couple of questions.

    1. Technology choice. I did my initial investigation and looked at Lucene.Net. There seem to be two issues with using Lucene.Net. First, it can't crawl external content, and there doesn't seem to be a direct port of Nutch to .NET. Second, since it is just an indexer, it can't parse the various document types; parsing is left to the developer. So, what would be the best technology choice on the .NET platform to achieve indexing and crawling? Are there any .NET open source libraries available for document parsing?

    2. Architectural pattern. Is there any general architectural pattern or best practice that needs to be followed in designing such a search engine?

    Thanks in advance.
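
    To make the "it is just an indexer" point concrete, the core of Lucene is a loop like the one below, shown with the Java Lucene API (which Lucene.Net mirrors closely; exact class names vary by version). Nothing here reads a PDF or a Word file: you hand the library already-extracted plain text, which is exactly the parsing gap the post describes.

        import java.nio.file.Paths;

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.store.FSDirectory;

        class IndexSketch {
            public static void main(String[] args) throws Exception {
                try (IndexWriter writer = new IndexWriter(
                        FSDirectory.open(Paths.get("index")),
                        new IndexWriterConfig(new StandardAnalyzer()))) {
                    Document doc = new Document();
                    // The caller is responsible for extracting this text from the
                    // source document (PDF, DOC, ...); Lucene only indexes it.
                    doc.add(new TextField("content", "extracted plain text goes here", Field.Store.NO));
                    doc.add(new TextField("path", "c:/docs/report.pdf", Field.Store.YES));
                    writer.addDocument(doc);
                }
            }
        }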

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows:

    1. Boot the workstation from the NIC.
    2. The workstation sends a DHCP request.
    3. The DHCP server responds with an IP address and the location of the PXE server.
    4. The workstation downloads a WinPE image file from the PXE server via TFTP.
    5. The workstation stores the WinPE image file in memory and executes it.
    6. Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files.
    7. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT).
    8. Another custom script asks for details such as the hostname to assign to the workstation.
    9. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP, XML for Win7).
    10. The Windows setup EXE is launched, passing the unattended answer file to it as a parameter.

    The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages.

    This process works very well for our workstations, and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6, running Windows Server 2003 (various editions) or 2008 (various editions).

    To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us:

    1. Sets up RAID with the HP Array Configuration Utility (ACU).
    2. Installs and configures SNMP.
    3. Installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc).

    Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB.

    If I abandon the SmartStart CD in favour of my PXE server, I would have to do the following:

    1. As I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 during the boot process to access the Option ROM Configuration for Arrays (ORCA).
    2. Installation of SNMP and the HP tools will have to be done once the Windows installation is complete, using the Proliant Support Pack.

    Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks.

    UPDATE 05/01/12

    I've been reading through the SmartStart Scripting Toolkit documentation. The toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array and set up iLO. I'm personally not too bothered about configuring BIOS settings, as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE either, as I'm happy to just press F8 and use ORCA. Although, if it's easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE.

    One of the nice features all the tools possess is that you can pass input files to them. E.g. configure one server to your requirements, capture its configuration to a file (using the appropriate tool), then use the tool on other servers, passing the input file with the captured configuration.

    Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them within a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. two physical disks in a RAID 1 configuration) without the array controller drivers?

    I mentioned in my post that SmartStart normally installs a bunch of Windows HP tools for you. I've had a look today, and if you run the SmartStart CD from within Windows, all the tools can be installed. Therefore I can do this after the Windows installation is complete.

    The SmartStart CD appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different to most drivers. I believe you have to provide the driver during the very early stages of the Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...

  • 12.04 grub unable to boot on /sde, upgrade-grub and boot-repair failed, please help

    - by VGR
    My problem is that I have 4 disks in a RAID array, listed as sda, sdb... sdd, and GRUB 2 refuses to boot from /sde (the 5th disk, standalone, containing a clean install of 12.04 64-bit). I tried all the solutions, but all fail (live CD/USB with grub-setup; also boot-repair; and also, in the "grub rescue" shell, set prefix= etc.). I also tried deactivating the RAID array in the BIOS, but I'd rather not destroy it, and I didn't find a way to make the standalone disk appear as '/sda1' (this would satisfy GRUB). In the BIOS, the would-be /sda is the only bootable hard disk; it ends up as /sde and GRUB complains. I've had boot-repair produce a pastebin. I always end up in grub rescue and I'm stuck. I need Ubuntu to boot so that I can add the device array handler for my disks. I can't switch the disks and I can't disconnect the SATA RAID controller. I need either (a) a workaround so that GRUB starts on /sde, or (b) a way to change the order in which Ubuntu sees the disks at boot time; I could then provide GRUB with a /sda1. Thanks a lot.

    Bump, please. It's not the same problem as booting Ubuntu from RAID: my RAID array serves only as a data repository, and Windows had no problem with this configuration.

  • HTTPS to HTTP redirect issue. How to overcome it?

    - by Akshay
    I have already seen suggestions on how to rewrite HTTPS to HTTP, and am currently using this technique:

        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [L,R=301]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    Problem: I am currently on a HostGator VPS and have found Google indexing my HTTPS pages. Weird for me, as I never bought an SSL certificate; my site is a blog only. When I spoke to the Google forums (https://productforums.google.com/forum/#!msg/webmasters/2Hz46t44nwk/7voZWudFtAQJ), they said I should redirect HTTPS to HTTP. Now that I have redirected it using the above method, I am still getting SSL warnings in browsers, and I found that Google is still indexing my new pages as HTTPS. I feel that, since I do not have an SSL certificate, adding a redirect on HTTPS doesn't work. So if Google indexes my HTTPS pages, should I go and buy an SSL certificate just so I can redirect HTTPS to HTTP there? Why would I do that? Please help me; this has reduced my traffic by nearly 30%. I have even told search engines to go to this file (which disallows everything) if they are on HTTPS:

        Options +FollowSymlinks
        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^robots.txt$ robots_ssl.txt

  • Cannot create zpool, how to get rid of an Intel RAID volume?

    - by nagylzs
    This is a FreeBSD 9.1 amd64 computer. It has 5 disks installed. The ada0 and ada1 disks are used with a hardware RAID to provide the root filesystem:

        root@gw:/home/gandalf # ls /dev | grep ada
        ada0
        ada1
        ada2
        ada3
        ada4
        root@gw:/home/gandalf # zpool status
          pool: zroot
         state: ONLINE
          scan: none requested
        config:

                NAME          STATE     READ WRITE CKSUM
                zroot         ONLINE       0     0     0
                  raid/r0s1a  ONLINE       0     0     0

        errors: No known data errors

    I want to create a raidz pool for the remaining disks:

        root@gw:/home/gandalf # zpool create -f data raidz1 ada2 ada3 ada4
        cannot create 'data': one or more devices is currently unavailable
        root@gw:/home/gandalf # dmesg | grep ada2
        ada2 at ata4 bus 0 scbus6 target 0 lun 0
        ada2: <WDC WD20EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
        ada2: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
        ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
        ada2: Previously was known as ad16
        root@gw:/home/gandalf # dmesg | grep ada3
        ada3 at ata5 bus 0 scbus7 target 0 lun 0
        ada3: <SAMSUNG HD103UJ 1AA01118> ATA-7 SATA 2.x device
        ada3: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
        ada3: 953868MB (1953523055 512 byte sectors: 16H 63S/T 16383C)
        ada3: Previously was known as ad18
        GEOM_RAID: Intel-fb8732fa: Disk ada3 state changed from NONE to ACTIVE.
        GEOM_RAID: Intel-fb8732fa: Subdisk Volume0:0-ada3 state changed from NONE to ACTIVE.
        root@gw:/home/gandalf # dmesg | grep ada4
        ada4 at ata6 bus 0 scbus8 target 0 lun 0
        ada4: <TOSHIBA DT01ACA100 MS2OA750> ATA-8 SATA 3.x device
        ada4: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
        ada4: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
        ada4: Previously was known as ad20

    Aha, so ada3 is already part of another RAID volume? Let's see:

        root@gw:/home/gandalf # dmesg | grep GEOM_RAID
        GEOM_RAID: SiI-130628113902: Array SiI-130628113902 created.
        GEOM_RAID: SiI-130628113902: Disk ada0 state changed from NONE to ACTIVE.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 state changed from NONE to STALE.
        GEOM_RAID: SiI-130628113902: Disk ada1 state changed from NONE to ACTIVE.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:0-ada1 state changed from NONE to STALE.
        GEOM_RAID: SiI-130628113902: Array started.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:0-ada1 state changed from STALE to ACTIVE.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 state changed from STALE to RESYNC.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 rebuild start at 0.
        GEOM_RAID: SiI-130628113902: Volume SiI Raid1 Set state changed from STARTING to SUBOPTIMAL.
        GEOM_RAID: SiI-130628113902: Provider raid/r0 for volume SiI Raid1 Set created.
        GEOM_RAID: Intel-fb8732fa: Array Intel-fb8732fa created.
        GEOM_RAID: Intel-fb8732fa: Force array start due to timeout.
        GEOM_RAID: Intel-fb8732fa: Disk ada3 state changed from NONE to ACTIVE.
        GEOM_RAID: Intel-fb8732fa: Subdisk Volume0:0-ada3 state changed from NONE to ACTIVE.
        GEOM_RAID: Intel-fb8732fa: Array started.
        GEOM_RAID: Intel-fb8732fa: Volume Volume0 state changed from STARTING to DEGRADED.
        GEOM_RAID: Intel-fb8732fa: Provider raid/r1 for volume Volume0 created.

    Yes, indeed. I want to get rid of raid/r1 completely. However, the controller was already set to "IDE" mode in the BIOS, so why is it creating a RAID volume? I have also tried overwriting the first 16k of data on ada3 and rebooting, but it did not help. How can I delete /dev/raid/r1?

        root@gw:/home/gandalf # graid status
           Name        Status  Components
        raid/r0  SUBOPTIMAL  ada0 (ACTIVE (RESYNC 4%))
                             ada1 (ACTIVE (ACTIVE))
        raid/r1  DEGRADED    ada3 (ACTIVE (ACTIVE))
        root@gw:/home/gandalf # graid delete raid/r1
        graid: Array 'raid/r1' not found.
        root@gw:/home/gandalf # graid delete /dev/raid/r1
        graid: Array '/dev/raid/r1' not found.

    Thanks

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler.

    Many other isomorphisms are not implemented; for example, there is an isomorphism expressed by the following client code:

        match v with | ((?x,?y),?z) => x,(y,z)   // Felix
        match v with | ((x,y),z) -> x,(y,z)      (* Ocaml *)

    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper:

        let c x = C x

    Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which can be done because it is a whole-program analyser; Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries.

    There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet in a real implementation, if the tuple is an array, the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

  • Efficient Trie implementation for unicode strings

    - by U Mad
    I have been looking for an efficient string trie implementation. Mostly I have found code like this reference implementation in Java (per Wikipedia). I dislike these implementations for mainly two reasons:

    1. They support only the 256 ASCII characters. I need to cover things like Cyrillic.
    2. They are extremely memory inefficient. Each node contains an array of 256 references, which is 4096 bytes on a 64-bit machine in Java. Each of these nodes can have up to 256 subnodes, each again with 4096 bytes of references. So a full trie for every ASCII two-character string would require a bit over 1 MB; for three-character strings, 256 MB just for the arrays in nodes; and so on. Of course I don't intend to have all 16 million three-character strings in my trie, so a lot of space is simply wasted. Most of these arrays are just null references, as their capacity far exceeds the actual number of inserted keys. And if I add Unicode, the arrays get even larger (char has 64k values instead of 256 in Java).

    Is there any hope of making an efficient trie for strings? I have considered a couple of improvements over these types of implementations:

    1. Instead of using an array of references, I could use an array of a primitive integer type, which indexes into an array of references to nodes whose size is close to the number of actual nodes.
    2. I could break strings into 4-bit parts, which would allow node arrays of size 16 at the cost of a deeper tree.
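
    For comparison, a minimal sketch of a map-based node, where each node allocates space only for the edges that actually exist: any char works (Cyrillic included), and memory scales with the number of real edges rather than a fixed 256 or 65,536 slots. Per-node HashMap overhead is far from free, so this is a baseline to measure against rather than a final answer:

        import java.util.HashMap;
        import java.util.Map;

        // Sketch: trie whose nodes grow with their actual child count.
        class MapTrie {
            private static final class Node {
                final Map<Character, Node> children = new HashMap<Character, Node>(2);
                boolean terminal; // true if a key ends at this node
            }

            private final Node root = new Node();

            void insert(String key) {
                Node n = root;
                for (int i = 0; i < key.length(); i++) {
                    char c = key.charAt(i);
                    Node next = n.children.get(c);
                    if (next == null) {
                        next = new Node();
                        n.children.put(c, next);
                    }
                    n = next;
                }
                n.terminal = true;
            }

            boolean contains(String key) {
                Node n = root;
                for (int i = 0; i < key.length(); i++) {
                    n = n.children.get(key.charAt(i));
                    if (n == null) return false;
                }
                return n.terminal;
            }
        }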

  • Implications on automatically "open" third party domain aliasing to one of my subdomains

    - by Giovanni
    I have a domain, let's call it www.mydomain.com, where I have a portal with an active community of users. In this portal, users cooperate, wiki-style, to build a certain "kind of software". These software applications can then be run by accessing public.mydomain.com/softwarename. I now want to let my users run these applications from their own subdomains. I know I can do that by automatically modifying the .htaccess file; this is not a problem. I want to let these users create DNS aliases pointing to one specific subdomain of mine. So if a user "pippo" who owns www.pippo.com wants to run the HelloWorld software from his own subdomain, he has to:

    1. Register on my site.
    2. Create his own subdomain on his own site, run.pippo.com.
    3. From his DNS control panel, create a CNAME record "run.pippo.com" pointing to "public.mydomain.com".
    4. Type http://run.pippo.com/HelloWorld into a browser.

    When the software (which physically runs on my server) is called, it first checks that the originating domain is a trusted one. I don't do any other kind of check that restricts software execution. From an SEO perspective, I care about Google indexing of www.mydomain.com, but I don't care about indexing of public.mydomain.com. What are the possible security implications of doing this for my site? Is there a better way to do this, or software that already does this that I can use?

  • How to get Google Desktop to index searchable PDFs

    - by user4941
    I've got a scanner that converts documents to PDFs, and the PDFs it produces are searchable. However, when Google Desktop indexes such a file, it doesn't appear to index any of the contents of the PDF (although it does index the content of other PDFs on the computer). I believe Google Desktop only indexes PDFs that are searchable and don't contain images. Have others found a good way around this problem? I'm trying to get my household and office paperless, and I'd like to just scan in documents and rely on Google Desktop to find stuff.
