Search Results

Search found 34305 results on 1373 pages for 'self referencing table'.


  • Output columns not in destination table?

    - by lance
    SUMMARY: I need to use an OUTPUT clause on an INSERT statement to return columns which don't exist on the table into which I'm inserting. If I can avoid it, I don't want to add columns to that table.

    DETAILS: My FinishedDocument table has only one column. This is the table into which I'm inserting:

        FinishedDocument
        -- DocumentID

    My Document table has two columns. This is the table from which I need to return data:

        Document
        -- DocumentID
        -- Description

    The following inserts one row into FinishedDocument. Its OUTPUT clause returns the DocumentID which was inserted. This works, but it doesn't give me the Description of the inserted document:

        INSERT INTO FinishedDocument
        OUTPUT INSERTED.DocumentID
        SELECT DocumentID
        FROM Document
        WHERE DocumentID = @DocumentID

    I need the INSERT to return both the DocumentID and the Description of the matching document from the Document table. What syntax do I need to pull this off? I'm thinking it's possible with the one INSERT statement, by tweaking the OUTPUT clause (in a way I clearly don't understand)? Or is there a smarter way that doesn't resemble the path I'm going down here?

    EDIT: SQL Server 2005

    Read the article

  • Python OpenGL Can't Redraw Scene

    - by RobbR
    I'm getting started with OpenGL and shaders using GLUT and PyOpenGL. I can draw a basic scene, but for some reason I can't get it to update: any changes I make during idle(), display(), or reshape() are not reflected. Here are the methods:

        def display(self):
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()
            glUseProgram(self.shader_program)
            self.m_vbo.bind()
            glEnableClientState(GL_VERTEX_ARRAY)
            glVertexPointerf(self.m_vbo)
            glDrawArrays(GL_TRIANGLES, 0, len(self.m_vbo))
            glutSwapBuffers()
            glutReportErrors()

        def idle(self):
            test_change += .1
            self.m_vbo = vbo.VBO(
                array([
                    [ test_change, 1, 0 ],  # triangle
                    [ -1, -1, 0 ],
                    [  1, -1, 0 ],
                    [  2, -1, 0 ],          # square
                    [  4, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2,  1, 0 ],
                ], 'f')
            )
            glutPostRedisplay()

        def begin(self):
            glutInit()
            glutInitWindowSize(400, 400)
            glutCreateWindow("Simple OpenGL")
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
            glutDisplayFunc(self.display)
            glutReshapeFunc(self.reshape)
            glutMouseFunc(self.mouse)
            glutMotionFunc(self.motion)
            glutIdleFunc(self.idle)
            self.define_shaders()
            glutMainLoop()

    I'd like to implement a time step in idle(), but even basic changes to the vertices or translations and rotations on the MODELVIEW matrix don't display. It just puts up the initial state and never updates. Am I missing something?
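    For contrast, here is a minimal, self-contained PyOpenGL/GLUT sketch of the idle-driven redraw pattern the question describes. This is not the poster's code: it assumes a double-buffered window, immediate-mode drawing, and keeps the animation state on the object rather than in a local variable.

        import sys
        from OpenGL.GL import *
        from OpenGL.GLUT import *

        class Demo:
            def __init__(self):
                self.offset = 0.0  # animation state lives on the object

            def display(self):
                glClear(GL_COLOR_BUFFER_BIT)
                glMatrixMode(GL_MODELVIEW)
                glLoadIdentity()
                glBegin(GL_TRIANGLES)            # immediate mode, just for the sketch
                glVertex3f(self.offset, 1.0, 0.0)
                glVertex3f(-1.0, -1.0, 0.0)
                glVertex3f(1.0, -1.0, 0.0)
                glEnd()
                glutSwapBuffers()                # pairs with GLUT_DOUBLE below

            def idle(self):
                self.offset += 0.001             # advance the time step
                glutPostRedisplay()              # ask GLUT to call display() again

            def begin(self):
                glutInit(sys.argv)
                glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)  # set the mode before creating the window
                glutInitWindowSize(400, 400)
                glutCreateWindow(b"Simple OpenGL")
                glutDisplayFunc(self.display)
                glutIdleFunc(self.idle)
                glutMainLoop()

        if __name__ == "__main__":
            Demo().begin()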

    Read the article

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?). A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables?
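    For comparison, the single-query alternative being weighed here, asking the database directly for the distinct combinations and backing that query with an index on (year, month), can be sketched like this (illustrative only: the table and column names are made up, and sqlite3 stands in for whatever RDBMS the original table lives in):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, year INTEGER, month INTEGER, payload TEXT)")
        conn.executemany(
            "INSERT INTO records (year, month, payload) VALUES (?, ?, ?)",
            [(2010, 3, "a"), (2010, 3, "b"), (2010, 4, "c")],
        )
        # An index on (year, month) lets the DISTINCT query be answered from the index alone.
        conn.execute("CREATE INDEX idx_records_year_month ON records (year, month)")

        # "Which month/year combinations do we have data for?"
        combos = conn.execute("SELECT DISTINCT year, month FROM records ORDER BY year, month").fetchall()
        print(combos)  # [(2010, 3), (2010, 4)]

        # "Do we have records for April 2010?"
        exists = conn.execute(
            "SELECT EXISTS(SELECT 1 FROM records WHERE year = ? AND month = ?)", (2010, 4)
        ).fetchone()[0]
        print(bool(exists))  # True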

    Read the article

  • How to trace a function array argument in DTrace

    - by uejio
    I still use DTrace just about every day in my job, and recently I had to print an argument to a function which was an array of strings. The array was variable length, up to about 10 items. I'm not sure if this is the right way to do it, but it seems to work and is not too painful if the array size is small.

    Here's an example. Suppose in your application you have the following function, where n is the number of items in the array s:

        void arraytest(int n, char **s)
        {
            /* Loop thru s[0] to s[n-1] */
        }

    How do you use DTrace to print out the values of s[i], or of s[0] to s[n-1]? DTrace does not have if-then blocks or for loops, so you can't do something like:

        for i=0; i<arg0; i++
            trace arg1[i];

    It turns out that you can use probe ordering as a kind of iterator. Probes with the same name will fire in the order that they appear in the script, so I can save the value of "n" in the first probe and then use it as part of the predicate of the next probe to determine whether that probe should fire or not. So the first probe for tracing the arraytest function is:

        pid$target::arraytest:entry
        {
            self->n = arg0;
        }

    Then, if I want to print out the first few items of the array, I first check the value of n. If it's greater than the index that I want to print out, then I can print that index. For example, if I want to print out the 3rd element of the array, I would do something like:

        pid$target::arraytest:entry
        /self->n > 2/
        {
            printf("%s", stringof(arg1 + 2 * sizeof(pointer)));
        }

    Actually, that doesn't quite work, because arg1 is a pointer to an array of pointers and needs to be copied twice from the user process space to the kernel space (which is where DTrace is). Also, the sizeof(char *) is 8, but for some reason I have to use 4, which is the sizeof(uint32_t). (I still don't know how that works.) So the script that prints the 3rd element of the array should look like:

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            /* print the 3rd element (index = 2) of the second arg. */
            i = 2;
            size = 4;
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

    If your array is large, then it's quite painful, since you have to write one probe for every array index. For example, here's the full script for printing the first 5 elements of the array:

        #!/usr/sbin/dtrace -s

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 0/
        {
            i = 0;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 1/
        {
            i = 1;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            i = 2;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 3/
        {
            i = 3;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 4/
        {
            i = 4;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

    If the array is large, your script will also have to be very long to print out all values of the array.

    Read the article

  • How can a driver change the kernel page table?

    - by Naruto
    I am encountering an issue with kernel memory. When my driver finishes running, other processes in the kernel fail to run. For example, when I run ls, the command crashes the kernel with the error "Corrupted page table" at a specified address. I do not know whether the page table of my driver relates to the page tables of other processes. How can my driver change the page tables of the other processes? And how does the driver of a process relate to the kernel page table? As I understand it, when the driver runs, it is switched to kernel context. The kernel has its own page table, and the driver has its own one. What is the relation among the kernel page table, the page table of my driver, and the page tables of the other processes when the driver runs in kernel context?

    Read the article

  • WPF FlowDocument: force calculation of height etc. "off screen"

    - by Lars
    My target: a DocumentPaginator which takes a FlowDocument with a table, splits the table to fit the page size, and repeats the header/footer (specially tagged TableRowGroups) on every page. For splitting the table I have to know the heights of its rows. While building the FlowDocument table in code, the height/width of the TableRows are 0 (of course). If I assign this document to a FlowDocumentScrollViewer (with PageSize set), the heights etc. are calculated. Is this possible without using a UI-bound object? Instantiating a FlowDocumentScrollViewer which is not bound to a window doesn't force the pagination/calculation of the heights.

    This is how I determine the height of a TableRow (which works perfectly for documents shown by a FlowDocumentScrollViewer):

        FlowDocument doc = BuildNewDocument();

        // what is the scrollviewer doing with the FlowDocument?
        FlowDocumentScrollViewer dv = new FlowDocumentScrollViewer();
        dv.Document = doc;
        dv.Arrange(new Rect(0, 0, 0, 0));

        TableRowGroup dataRows = null;
        foreach (Block b in doc.Blocks)
        {
            if (b is Table)
            {
                Table t = b as Table;
                foreach (TableRowGroup g in t.RowGroups)
                {
                    if ((g.Tag is String) && ((String)g.Tag == "dataRows"))
                    {
                        dataRows = g;
                        break;
                    }
                }
            }
            if (dataRows != null)
                break;
        }

        if (dataRows != null)
        {
            foreach (TableRow r in dataRows.Rows)
            {
                double maxCellHeight = 0.0;
                foreach (TableCell c in r.Cells)
                {
                    Rect start = c.ElementStart.GetCharacterRect(LogicalDirection.Forward);
                    Rect end = c.ElementEnd.GetNextInsertionPosition(LogicalDirection.Backward).GetCharacterRect(LogicalDirection.Forward);
                    double cellHeight = end.Bottom - start.Top;
                    if (cellHeight > maxCellHeight)
                        maxCellHeight = cellHeight;
                }
                System.Diagnostics.Trace.WriteLine("row " + dataRows.Rows.IndexOf(r) + " = " + maxCellHeight);
            }
        }

    Edit: I added the FlowDocumentScrollViewer to my example. The call to "Arrange" forces the FlowDocument to calculate its heights etc. I would like to know what the FlowDocumentScrollViewer is doing with the FlowDocument, so I can do it without the UIElement. Is it possible?

    Read the article

  • Self-updating collection concurrency issues

    - by DEHAAS
    I am trying to build a self-updating collection. Each item in the collection has a position (x, y). When the position is changed, an event is fired and the collection relocates the item. Internally the collection uses a "jagged dictionary": the outer dictionary uses the x-coordinate as a key, while the nested dictionary uses the y-coordinate as a key. The nested dictionary then has a list of items as its value. The collection also maintains a dictionary that stores each item's position as stored in the nested dictionaries: an item-to-stored-location lookup.

    I am having some trouble making the collection thread safe, which I really need. Source code for the collection:

        public class PositionCollection<TItem, TCoordinate> : ICollection<TItem>
            where TItem : IPositionable<TCoordinate>
            where TCoordinate : struct, IConvertible
        {
            private readonly object itemsLock = new object();
            private readonly Dictionary<TCoordinate, Dictionary<TCoordinate, List<TItem>>> items;
            private readonly Dictionary<TItem, Vector<TCoordinate>> storedPositionLookup;

            public PositionCollection()
            {
                this.items = new Dictionary<TCoordinate, Dictionary<TCoordinate, List<TItem>>>();
                this.storedPositionLookup = new Dictionary<TItem, Vector<TCoordinate>>();
            }

            public void Add(TItem item)
            {
                if (item.Position == null)
                {
                    throw new ArgumentException("Item must have a valid position.");
                }

                lock (this.itemsLock)
                {
                    if (!this.items.ContainsKey(item.Position.X))
                    {
                        this.items.Add(item.Position.X, new Dictionary<TCoordinate, List<TItem>>());
                    }

                    Dictionary<TCoordinate, List<TItem>> xRow = this.items[item.Position.X];
                    if (!xRow.ContainsKey(item.Position.Y))
                    {
                        xRow.Add(item.Position.Y, new List<TItem>());
                    }

                    xRow[item.Position.Y].Add(item);

                    if (this.storedPositionLookup.ContainsKey(item))
                    {
                        this.storedPositionLookup[item] = new Vector<TCoordinate>(item.Position);
                    }
                    else
                    {
                        // Store a copy of the original position
                        this.storedPositionLookup.Add(item, new Vector<TCoordinate>(item.Position));
                    }

                    item.Position.PropertyChanged += (object sender, PropertyChangedEventArgs eventArgs) => this.UpdatePosition(item, eventArgs.PropertyName);
                }
            }

            private void UpdatePosition(TItem item, string propertyName)
            {
                lock (this.itemsLock)
                {
                    Vector<TCoordinate> storedPosition = this.storedPositionLookup[item];
                    this.RemoveAt(storedPosition, item);
                    this.storedPositionLookup.Remove(item);
                }
            }
        }

    I have written a simple unit test to check for concurrency issues:

        [TestMethod]
        public void TestThreadedPositionChange()
        {
            PositionCollection<Crate, int> collection = new PositionCollection<Crate, int>();
            Crate crate = new Crate(new Vector<int>(5, 5));
            collection.Add(crate);

            Parallel.For(0, 100, new Action<int>((i) => crate.Position.X += 1));

            Crate same = collection[105, 5].First();
            Assert.AreEqual(crate, same);
        }

    The actual stored position varies every time I run the test. I appreciate any feedback you may have.

    Read the article

  • Javascript self contained sandbox events and client side stack

    - by amnon
    I'm in the process of moving a JSF-heavy web application to a REST and mainly JS module application. I've watched "Scalable JavaScript Application Architecture" by Nicholas Zakas on YUI Theater (excellent video) and implemented much of the talk with good success, but I have some questions:

    1. I found the lecture a little confusing in regards to the relationship between modules and sandboxes. On one hand, to my understanding, modules should not be affected by something happening outside of their sandbox, and this is why they publish events via the sandbox (and not via the core, though they do access the core for hiding the base library); but each module in the application gets a new sandbox? Shouldn't the sandbox limit events to the modules using it, or should events be published cross-page? E.g.: if I have two editable tables, and I want to contain each one in a different sandbox so that its events affect only the modules inside that sandbox (something like a message box per table, which is a different module/widget), how can I do that with a sandbox per module? Of course I can prefix the events with the module id, but that creates coupling that I want to avoid, and I don't want to package modules together as one module per combination, as I already have 6-7 modules.

    2. While I can hide the base library for small things like an id selector etc., I would still like to use the base library for module dependencies and resource loading, using something like YUI Loader or dojo.require. So in fact I'm hiding the base library, but the modules themselves are defined and loaded by the base library, which seems a little strange to me.

    3. Libraries don't return simple JS objects but usually wrap them. E.g.: you can do something like $$('.classname').each(.., which cleans the code up a lot. It makes no sense to wrap the base and then, in the module, create a dependency on the base library by executing .each; but not using those features means writing a lot of code that could be left out, and implementing that functionality is very bug prone.

    4. Does anyone have any experience with building a front-side stack of this order? How easy is it to change a base library and/or have modules from different libraries, using the YUI DataTable but doing form validation with Dojo?

    5. Somewhat of a combination of 2 and 4: if you choose to do something like I said and load Dojo form validation widgets for inputs via YUI Loader, would that mean Dojo core is a module and the form module is dependent on it?

    Thanks.

    Read the article

  • Comments Parent-Child query with indentation

    - by poldoj
    I've been trying to retrieve comments to articles in a pretty common blog-fashion way. Here's my sample code:

        -- ----------------------------
        -- Sample Table structure for [dbo].[Comments]
        -- ----------------------------
        CREATE TABLE [dbo].[Comments] (
            [CommentID] int NOT NULL ,
            [AddedDate] datetime NOT NULL ,
            [AddedBy] nvarchar(256) NOT NULL ,
            [ArticleID] int NOT NULL ,
            [Body] nvarchar(4000) NOT NULL ,
            [parentCommentID] int NULL
        )
        GO

        -- ----------------------------
        -- Sample Records of Comments
        -- ----------------------------
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'1', N'2011-11-26 23:18:07.000', N'user', N'1', N'body', null);
        GO
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'2', N'2011-11-26 23:18:50.000', N'user', N'2', N'body', null);
        GO
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'3', N'2011-11-26 23:19:09.000', N'user', N'1', N'body', null);
        GO
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'4', N'2011-11-26 23:19:46.000', N'user', N'3', N'body', null);
        GO
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'5', N'2011-11-26 23:20:16.000', N'user', N'1', N'body', N'1');
        GO
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'6', N'2011-11-26 23:20:42.000', N'user', N'1', N'body', N'1');
        GO
        INSERT INTO [dbo].[Comments] ([CommentID], [AddedDate], [AddedBy], [ArticleID], [Body], [parentCommentID]) VALUES (N'7', N'2011-11-26 23:21:25.000', N'user', N'1', N'body', N'6');
        GO

        -- ----------------------------
        -- Indexes structure for table Comments
        -- ----------------------------

        -- ----------------------------
        -- Primary Key structure for table [dbo].[Comments]
        -- ----------------------------
        ALTER TABLE [dbo].[Comments] ADD PRIMARY KEY ([CommentID])
        GO

        -- ----------------------------
        -- Foreign Key structure for table [dbo].[Comments]
        -- ----------------------------
        ALTER TABLE [dbo].[Comments] ADD FOREIGN KEY ([parentCommentID]) REFERENCES [dbo].[Comments] ([CommentID]) ON DELETE NO ACTION ON UPDATE NO ACTION
        GO

    I thought I could use a CTE query to do the job, like this:

        WITH CommentsCTE(CommentID, AddedDate, AddedBy, ArticleID, Body, parentCommentID, lvl, sortcol)
        AS
        (
            SELECT CommentID, AddedDate, AddedBy, ArticleID, Body, parentCommentID, 0,
                   cast(CommentID as varbinary(max))
            FROM Comments

            UNION ALL

            SELECT P.CommentID, P.AddedDate, P.AddedBy, P.ArticleID, P.Body, P.parentCommentID, PP.lvl+1,
                   CAST(sortcol + CAST(P.CommentID AS BINARY(4)) AS VARBINARY(max))
            FROM Comments AS P
            JOIN CommentsCTE AS PP ON P.parentCommentID = PP.CommentID
        )
        SELECT REPLICATE('--', lvl) + right('>',lvl) + AddedBy + ' :: ' + Body,
               CommentID, parentCommentID, lvl
        FROM CommentsCTE
        WHERE ArticleID = 1
        ORDER BY sortcol
        GO

    but the results have been very disappointing so far, and after days of tweaking I decided to ask for help. I was looking for a method to display hierarchical comments to articles like it happens in blogs.

    [edit] The problem with this query is that I get duplicates, because I couldn't figure out how to properly select the ArticleID whose comments I want to display. I'm also looking for a method that sorts child entries by date within the same level.

    An example of what I'm trying to accomplish could be something like:

        (ArticleID[post retrieved])
        -------------------------
        -------------------------
        (Comments[related to the article id above])
        first comment[no parent]
        --[first child to first comment]
        --[second child to first comment]
        ----[first child to second child comment to first comment]
        --[third child to first comment]
        ----[first child to third child comment to first comment]
        ------[(recursive child): first child to first child to third child comment to first comment]
        ------[(recursive child): second child to first child to third child comment to first comment]
        second comment[no parent]
        third comment[no parent]
        --[first child to third comment]

    I kinda got myself lost in all this mess... I appreciate any help or simpler ways to get this working. Thanks

    Read the article

  • Which programming idiom to choose for this open source library?

    - by Walkman
    I have an interesting question about which programming idiom is easier for beginner developers writing concrete file-parsing classes. I'm developing an open source library, one of whose main functions is to parse plain text files and get structured information from them. All of the files contain the same kind of information, but they can be in different formats like XML or plain text (each structured differently). There is a common set of information pieces which is the same in all of them (e.g. player names, table names, some id numbers). Some formats are very similar to each other, so it's possible to define a common base class for them to facilitate concrete format parser implementations. So I can clearly define base classes like SplittablePlainTextFormat, XMLFormat, SeparateSummaryFormat, etc. Each of them hints at the kind of structure it aims to parse. All of the concrete classes should expose the same information pieces, no matter what. To be useful at all, this library needs to define at least 30-40 of these parsers. A couple of them are more important than others (obviously the more popular formats).

    Now my question is: which is the best programming idiom to choose to facilitate the development of these concrete classes? Let me explain. I think imperative programming is easy to follow even for beginners, because the flow is fixed and the statements just come one after another. Right now, I have this:

        class SplittableBaseFormat:
            def parse(self):
                "Parses the body of the hand history, but first parse header if not yet parsed."
                if not self.header_parsed:
                    self.parse_header()
                self._parse_table()
                self._parse_players()
                self._parse_button()
                self._parse_hero()
                self._parse_preflop()
                self._parse_street('flop')
                self._parse_street('turn')
                self._parse_street('river')
                self._parse_showdown()
                self._parse_pot()
                self._parse_board()
                self._parse_winners()
                self._parse_extra()
                self.parsed = True

    So the concrete parser needs to define these methods, in order, in any way it wants. Easy to follow, but it takes longer to implement each individual concrete parser.

    So what about declarative? In this case the base classes (like SplittableFormat and XMLFormat) would do the heavy lifting based on regex and line/node number declarations in the concrete class, and the concrete classes would have no code at all, just line numbers and regexes, maybe other kinds of rules. Like this (see also the sketch at the end of this post):

        class SplittableFormat:
            def parse_table():
                "Parses TABLE_REGEX and get information"
                # set attributes here
            def parse_players():
                "parses PLAYER_REGEX and get information"
                # set attributes here

        class SpecificFormat1(SplittableFormat):
            TABLE_REGEX = re.compile('^(?P<table_name>.*) other info \d* etc')
            TABLE_LINE = 1
            PLAYER_REGEX = re.compile('^Player \d: (?P<player_name>.*) has (.*) in chips.')
            PLAYER_LINE = 16

        class SpecificFormat2(SplittableFormat):
            TABLE_REGEX = re.compile(r'^Tournament #(\d*) (?P<table_name>.*) other info2 \d* etc')
            TABLE_LINE = 2
            PLAYER_REGEX = re.compile(r'^Seat \d: (?P<player_name>.*) has a stack of (\d*)')
            PLAYER_LINE = 14

    So if I want to make it possible for non-developers to write these classes, the way to go seems to be declarative; however, I'm almost certain I can't eliminate the regex declarations, which clearly need (senior :D) programmers, so should I care about this at all? Do you think it matters which one I choose, or doesn't it matter at all? Maybe if somebody wants to work on this project, they will; if not, it won't matter which idiom I chose.

    Can I "convert" non-programmers to help develop these? What are your observations? Other considerations:

    - Imperative will allow any kind of work; there is a simple flow which they can follow, but inside that they can do whatever they want. It would be harder to force a common interface with imperative because of these arbitrary implementations.
    - Declarative will be much more rigid, which is a bad thing, because formats might change over time without any notice.
    - Declarative will be harder for me to develop and will take longer. Imperative is already ready to release.

    I hope a nice discussion will happen in this thread about programming idioms: which to use when, which is better for open source projects with different scenarios, and which is better for a wide range of developer skills.

    TL;DR:

    - Parsing different file formats (plain text, XML)
    - They contain the same kind of information
    - Target audience: non-developers, beginners
    - Regex probably cannot be avoided
    - 30-40 concrete parser classes needed
    - Facilitate coding these concrete classes
    - Which idiom is better?
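    To make the declarative option concrete, here is a sketch (my own illustration, not the library's actual code) of how a base class could do the heavy lifting from the declared regex/line-number pairs. It assumes the concrete classes declare TABLE_REGEX/TABLE_LINE and PLAYER_REGEX/PLAYER_LINE, and treats the line numbers as 0-based indexes into the split input:

        import re

        class SplittableFormat:
            """Base class: concrete formats only declare regexes and line numbers."""

            # (regex attribute, line-number attribute) pairs driven by the declarations
            FIELDS = (("TABLE_REGEX", "TABLE_LINE"),
                      ("PLAYER_REGEX", "PLAYER_LINE"))

            def __init__(self, text):
                self.lines = text.splitlines()
                self.attrs = {}

            def parse(self):
                for regex_name, line_name in self.FIELDS:
                    regex = getattr(self, regex_name)
                    line_no = getattr(self, line_name)
                    match = regex.match(self.lines[line_no])
                    if match:
                        # named groups like (?P<table_name>...) become parsed attributes
                        self.attrs.update(match.groupdict())
                return self.attrs

        class SpecificFormat1(SplittableFormat):
            TABLE_REGEX = re.compile(r"^(?P<table_name>.*) other info \d* etc")
            TABLE_LINE = 0
            PLAYER_REGEX = re.compile(r"^Player \d: (?P<player_name>.*) has (.*) in chips\.")
            PLAYER_LINE = 1

        # Usage sketch:
        # SpecificFormat1("Table Alpha other info 42 etc\nPlayer 1: hero has 1500 in chips.").parse()
        # -> {'table_name': 'Table Alpha', 'player_name': 'hero'}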

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described on how to host a Node.js application on Windows Azure, one of questions might be raised about how to consume the vary Windows Azure services, such as the storage, service bus, access control, etc.. Interact with windows azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe on how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let’s firstly have a look on how to consume WAS through Node.js. As we know in the previous post we can host Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate for hosting in WACS worker role. The Node.js code can be used when consuming WAS when hosted on WAWS. But since there’s no roles in WAWS, the code for consuming service runtime mentioned in the next section cannot be used for WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new windows azure project in Visual Studio with a worker role, add the “node.exe” and “index.js” and install “express” and “node-sqlserver” modules, make all files as “Copy always”. In order to use windows azure services we need to have Windows Azure Node.js SDK, as knows as a module named “azure” which can be installed through NPM. Once we downloaded and installed, we need to include them in our worker role project and make them as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the currently worker role project file. You can also find the source code of this tool here. The source code of Windows Azure SDK for Node.js can be found in its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming vary windows azure services includes tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate on how to use tables and service runtime information in this post. You can find the full document of this SDK here. Back to Visual Studio and open the “index.js”, let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should looks like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, then it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime, and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's very simple and easy to learn and deploy, and it utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-sensitive projects. And as Node.js is very good at scaling out, it's all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" doesn't support SQL parameters yet. "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Ubuntu + Wacom Intuos 4 + MyPaint HELP!

    - by Sativa
    Please keep in mind I'm not that computer savvy, but I will try any suggestion so please help me out! My tablet will stop working if the USB connection is ever broken, or the Ubuntu software is being updated. Sometimes it will stop working for no reason that I can see. The lights will still be on, but it won't be responsive. It doesn't work again until I restart the laptop with the tablet plugged in, which is grating if you have to do it every 25 min. or so... I'm not sure if the issue is with the port, the tablet/cable or the driver but any suggestions would be very welcome! Also, MyPaint is frequently having hiccups. It seems to save fine but at times it will randomly close down and when I open files they're often empty. They turn into 0Kb files and only contain a single empty layer. Also very grating, considering I lose days of work for no clear reason and without any heads up. Again, I'm not sure if the issue is with the port, the tablet/cable or the driver but any suggestions would be very welcome! The error message reads; Traceback (most recent call last): File "/usr/share/mypaint/gui/application.py", line 177, at_application_start(*junk=()) else: self.filehandler.open_file(fn) variables: {'fn': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora'), 'self.filehandler.open_file': ('local', <bound method FileHandler.wrapper of <gui.filehandling.FileHandler object at 0x7fdb89063a10>>)} File "/usr/share/mypaint/gui/drawwindow.py", line 60, wrapper(self=<gui.filehandling.FileHandler object>, *args=(u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',), **kwargs={}) try: func(self, *args, **kwargs) # gtk main loop may be called in here... variables: {'self': ('local', <gui.filehandling.FileHandler object at 0x7fdb89063a10>), 'args': ('local', (u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',)), 'func': ('local', <function open_file at 0x7fdb8b397b90>), 'kwargs': ('local', {})} File "/usr/share/mypaint/gui/filehandling.py", line 231, open_file(self=<gui.filehandling.FileHandler object>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora') try: self.doc.model.load(filename, feedback_cb=self.gtk_main_tick) except document.SaveLoadError, e: variables: {'self.doc.model.load': ('local', <bound method Document.load of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'feedback_cb': (None, []), 'self.gtk_main_tick': ('local', <function gtk_main_tick at 0x7fdb8b397b18>), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')} File "/usr/share/mypaint/lib/document.py", line 544, load(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', **kwargs={'feedback_cb': <function gtk_main_tick>}) try: load(filename, **kwargs) except gobject.GError, e: variables: {'load': ('local', <bound method Document.load_ora of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'kwargs': ('local', {'feedback_cb': <function gtk_main_tick at 0x7fdb8b397b18>}), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')} File "/usr/share/mypaint/lib/document.py", line 772, load_ora(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', feedback_cb=<function gtk_main_tick>) tempdir = tempdir.decode(sys.getfilesystemencoding()) z = zipfile.ZipFile(filename) print 'mimetype:', z.read('mimetype').strip() variables: {'zipfile.ZipFile': ('global', <class 'zipfile.ZipFile'>), 'z': (None, []), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa 
Chibi.ora')} File "/usr/lib/python2.7/zipfile.py", line 770, __init__(self=<zipfile.ZipFile object>, file=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', mode='r', compression=0, allowZip64=False) if key == 'r': self._RealGetContents() elif key == 'w': variables: {'self._RealGetContents': ('local', <bound method ZipFile._RealGetContents of <zipfile.ZipFile object at 0x7fdb9b952790>>)} File "/usr/lib/python2.7/zipfile.py", line 811, _RealGetContents(self=<zipfile.ZipFile object>) if not endrec: raise BadZipfile, "File is not a zip file" if self.debug > 1: variables: {'BadZipfile': ('global', <class 'zipfile.BadZipfile'>)} BadZipfile: File is not a zip file
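    The final line of that traceback, "BadZipfile: File is not a zip file", is the telling detail: an .ora file is an OpenRaster document, which is a zip archive, so MyPaint is being handed a save file that is no longer a valid archive (consistent with the 0Kb files mentioned above). A small diagnostic sketch (not part of MyPaint; just Python's standard zipfile module) for checking suspect files:

        import sys
        import zipfile

        def check_ora(path):
            """Report whether an OpenRaster (.ora) file is still a readable zip archive."""
            if not zipfile.is_zipfile(path):
                print("%s: not a valid zip archive (likely truncated or empty)" % path)
                return
            archive = zipfile.ZipFile(path)
            try:
                bad = archive.testzip()  # name of the first corrupt member, or None
                if bad is not None:
                    print("%s: corrupt member %s" % (path, bad))
                else:
                    print("%s: OK, %d entries" % (path, len(archive.namelist())))
            finally:
                archive.close()

        if __name__ == "__main__":
            for p in sys.argv[1:]:
                check_ora(p)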

    Read the article

  • when i set property(retain), one "self" = one "retain"?

    - by Walter
    Although I'm about to finish my first app, I'm still confused about very basic memory management. I've read through many posts here and the Apple documents, but I'm still puzzled. For example, I'm currently doing things like this to add a label programmatically:

        @property (retain, nonatomic) UILabel *showTime;

        @synthesize showTime;

        showTime = [[UILabel alloc] initWithFrame:CGRectMake(45, 4, 200, 36)];
        [self.showTime setText:[NSString stringWithFormat:@"%d", time]];
        [self.showTime setFont:[UIFont fontWithName:@"HelveticaRoundedLT-Bold" size:23]];
        [self.showTime setTextColor:numColor];
        self.showTime.backgroundColor = [UIColor clearColor];
        [self addSubview:self.showTime];
        [showTime release];

    This is my current practice for UILabel, UIButton, UIImageView, etc.: alloc/init it without "self.", because I know that going through "self." would retain twice; but after the allocation I put the "self." back to set the attributes. My gut feeling tells me I am doing something wrong, but it works superficially and I found no memory leak in Analyze or Instruments. Can anyone give me advice? When I use "self." to set the text and set the background color, does it retain once automatically? Thanks so much!

    Read the article

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today, it is detected, but all data is gone: (EDIT at least some data is still there, but the partition table is gone) EDIT @Sathya (thanks) here's the relevant output from sudo fdisk -l: Disk /dev/sdc: 8019 MB, 8019509248 bytes 247 heads, 62 sectors/track, 1022 cylinders Units = cylinders of 15314 * 512 = 7840768 bytes Disk /dev/sdc doesn't contain a valid partition table It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then dmesg | tail): /media> sudo mount /dev/sdc mytmp mount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or other error In some cases useful info is found in syslog - try dmesg | tail or so /media> dmesg | tail [ 24.300000] sdc: unknown partition table [ 24.320000] sd 2:0:0:0: Attached scsi removable disk sdc [ 24.370000] usb-storage: device scan complete [ 26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)! [ 26.870000] EXT2-fs: group descriptors corrupted! [ 50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0 [ 50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0 [ 50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0 [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)! [ 5565.470000] EXT2-fs: group descriptors corrupted! EDIT @Col: results from testdisk Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62 Current partition structure: Partition Start End Size in sectors Partition sector doesn't have the endmark 0xAA55 After I hit [proceed], it says: Structure: Ok. Keys A: add partition, L: load backup, Enter: to continue The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" - would it be better to do this? EDIT I found that at least some of my data is still on the flash drive, by using the below, and searching for English text in less (like " the "): cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today). I've learnt that "/dev/sde1" would be the first partition; and "/dev/sde" is the whole drive. Because unix treats these devices just like files, you can use all the ordinary unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim. How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?

    Read the article

  • Is there a difference between "self-plagiarizing" in programming vs doing so as a writer?

    - by makerofthings7
    I read this Gawker article about how a writer reused some of his older material for new assignments for different magazines. Is there any similar ethical (societal?) dilemma when doing the same thing in the realm of programming? Does reusing a shared library you've accumulated over the years amount to self-plagiarism? What I'm getting at is that the creative world of software development doesn't seem as stringent about self-plagiarism as, say, journalism or blogging. In fact, in one of my interviews at GS I was asked what kind of libraries I've developed over the years, implying that my getting the job would entail co-licensing helpful portions of code to that company. Are there any cases where, although it's legal to self-plagiarize, it would be frowned upon in the software world?

    Read the article

  • Why do Core Data sqlite table columns start with 'Z'?

    - by Dia
    I was looking at the sqlite table that Core Data generates and noticed that all table columns start with 'Z'. I realize this is an implementation detail, but I was curious as to why that's the case and if there was a design decision involved in this. Anyone happen to know or guess why? Here's a sample schema output of a Core Data sqlite database:

        sqlite> .schema
        CREATE TABLE ZPOST (
            Z_PK INTEGER PRIMARY KEY,
            Z_ENT INTEGER,
            Z_OPT INTEGER,
            ZPOSTID INTEGER,
            ZUSER INTEGER,
            ZCREATEDAT TIMESTAMP,
            ZTEXT VARCHAR
        );
        CREATE TABLE ZUSER (
            Z_PK INTEGER PRIMARY KEY,
            Z_ENT INTEGER,
            Z_OPT INTEGER,
            ZUSERID INTEGER,
            ZAVATARIMAGEURLSTRING VARCHAR,
            ZUSERNAME VARCHAR
        );
        CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB);
        CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER);

    Read the article

  • Internet Explorer percent based layout issue

    - by Tom
    Heya, My goal is to make a layout that is 200% width and height, with four containers of equal height and width (100% each), using no javascript as the bear minimum (or preferably no hacks). Right now I am using HTML5, and CSS display:table. It works fine in Safari 4, Firefox 3.5, and Chrome 5. I haven't tested it yet on older versions. Nonetheless, in IE7 and IE8 this layout fails completely. (I do use the Javascript HTML5 enabling script /cc../, so it should not be the use of new HTML5 tags) Here is what I have: <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta charset="UTF-8" /> <title>IE issue with layout</title> <style type="text/css" media="all"> /* styles */ @import url("reset.css"); /* Generall CSS */ .table { display:table; } .row { display:table-row; } .cell { display:table-cell; } /* Specific CSS */ html, body { //overflow:hidden; I later intend to limit the viewport } section#body { position:absolute; width:200%; height:200%; overflow:hidden; } section#body .row { width:200%; height:50%; overflow:hidden; } section#body .row .cell { width:50%; overflow:hidden; } section#body .row .cell section { display:block; width:100%; height:100%; overflow:hidden; } section#body #stage0 section header { text-align:center; height:20%; display:block; } section#body #stage0 section footer { display:block; height:80%; } </style> </head> <body> <section id="body" class="table"> <section class="row"> <section id="stage0" class="cell"> <section> <header> <form> <input type="text" name="q" /> <input type="submit" value="Search" /> </form> </header> <footer> <table id="scrollers"> </table> </footer> </section> </section> <section id="stage1" class="cell"> <section> content </section> </section> </section> <section class="row"> <section id="stage2" class="cell"> <section> content </section> </section> <section id="stage3" class="cell"> <section> content </section> </section> </section> </section> </body> </html> You can see it live here: http://www.tombarrasso.com/ie-issue/

    Read the article

  • How to truncate a table with Spring JdbcTemplate?

    - by Marcus
    I'm trying to truncate a table with Spring:

        jdbcTemplate.execute("TRUNCATE TABLE " + table);

    I get the error:

        Caused by: org.springframework.jdbc.BadSqlGrammarException: StatementCallback; bad SQL grammar [TRUNCATE TABLE RESULT_ACCOUNT];
        nested exception is java.sql.SQLException: Unexpected token: TRUNCATE in statement [TRUNCATE]

    Any ideas?

    Read the article

  • Option In The Html Agility Pack That Parses The Tag `&lt;table&gt;`

    - by Harikrishna
    Is there any option in the Html Agility Pack that can parse a tag which is written with &lt; and &gt;? If the tag appears literally as <table>, the Html Agility Pack parses the information from the table tag properly. But if the tag is written as &lt;table&gt;, it does not parse the information from the table tag. So is there any option in the Html Agility Pack to parse information from such tags as well?

    Read the article

  • ERROR 1005 (HY000): Can't create table 'tmp' (errno: 13)

    - by kobey
    Hi, I'm running MySQL on Ubuntu 9.10. The MySQL process is running as root, and I'm using the root account when logging in to MySQL, to which I gave all privileges. I'm using my own db (not mysql). I can create a table, but when I try to create a temporary table I get this error:

        ERROR 1005 (HY000): Can't create table 'tmp' (errno: 13)

    for this query:

        CREATE TEMPORARY TABLE tmp (id int);

    I have plenty of space on my hard drive, and all permissions are granted (/var/lib/mysql also has mysql permissions). Any idea? Thanks, Koby

    Read the article

  • UITableView reloadData to sort the table cells

    - by harekam_taj
    Hey guys, I have a UITableView and I am populating it with data from the internet. I added some sort features to the table, but when I do the sorting, fetch new results from the web server, and reload the table, the table doesn't refresh: the old results stay in the table, and if I press on a particular cell I see the new result just for that cell. Can someone please help me? Thanks

    Read the article

  • Undo table updates in SQL Server 2008

    - by sikas
    I accidentally updated a table in my MS SQL Server 2008 database. I was updating one table from another by copying cell by cell, but I have overwritten the original table. Is there a way I can restore the table contents to what they were?

    Read the article

  • Check if access table exists

    - by HasanGursoy
    I want to log web site visits' IP, datetime, client, and referrer data to an Access database, but I'm planning to log each day's data in a separate table; for example, logs for 06.06.2010 will be logged in a table named 2010_06_06. When the date changes I'll create a table named 2010_06_07. But the problem is that this table may already have been created. Any suggestions on how to check whether a table exists in Access?

    Read the article

  • Converting a Table in Oracle Database into an XML file

    - by Geethapriya.VC
    Hi, I need to create an XML file, given a table/view in the database. The actual requirement is: the code has to take the table/view name as an input parameter, read its schema, create a corresponding XML file with the same structure as the table/view, and load the XML with the data of the table/view. The preferred language here is JSP. Kindly let me know how to go about this. Thanks in advance, Geetha

    Read the article
