Search Results

Search found 18786 results on 752 pages for 'document storage'.

  • Dynamic Document Template

    - by ell
    I would like to create a "C++ Class" document template. I know you can make static ones by putting them into ~/Templates, but I would like the content to change according to the file name on creation, for example using a template like this (pseudocode):

        #ifndef $(filename)_HPP_INCLUDED
        #define $(filename)_HPP_INCLUDED

        class $(filename)
        {
        public:
        };

        #endif // $(filename)_HPP_INCLUDED

    Is this possible? If so, how can I do it? Thanks in advance, ell.
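
    Not part of the original question, but a minimal sketch of the kind of generator the template above implies: a small Python script (hypothetical name make_header.py) that substitutes a class name into a header-guard template and writes ClassName.hpp. The template text mirrors the pseudocode; everything else is an assumption.

        #!/usr/bin/env python3
        # Sketch: generate ClassName.hpp from a name passed on the command line.
        import sys
        import textwrap
        from pathlib import Path
        from string import Template

        TEMPLATE = Template(textwrap.dedent("""\
            #ifndef ${NAME}_HPP_INCLUDED
            #define ${NAME}_HPP_INCLUDED

            class ${name}
            {
            public:
            };

            #endif // ${NAME}_HPP_INCLUDED
            """))

        def make_header(class_name: str) -> None:
            text = TEMPLATE.substitute(name=class_name, NAME=class_name.upper())
            Path(class_name + ".hpp").write_text(text)   # e.g. Widget.hpp

        if __name__ == "__main__":
            make_header(sys.argv[1])                     # python3 make_header.py Widget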

    Read the article

  • Vattenfall Accelerates Projects and Cuts Costs with AutoVue Document Visualization

    Ringhals, a Swedish nuclear power plant, part of the Vattenfall Group, produces 20 percent of the country's electricity and is the largest power station in the Nordic region. Ringhals has standardized on AutoVue for most of their engineering and asset document visualization requirements throughout their plant maintenance, design and engineering operations. As a result, they have cut IT maintenance costs, increased productivity, and improved maintenance operations.

    Read the article

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an XML document against an XML Schema document, one has to:

        1. construct the XMLSchema object (basically, parse the schema document)
        2. construct the XMLParser, passing the XMLSchema object as its schema argument
        3. parse the actual XML document (instance document) using the constructed parser

    There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified 'externally' (as opposed to specifying it inside the actual XML document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, it completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi. This introduces a whole bunch of limitations, starting with the fact that you have to deal with the instance<-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document), you cannot validate the document using multiple schemata (say, when each schema governs its own namespace), and so on. So the question is: maybe I am missing something completely trivial, or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:

        - have the parser use the schema location declarations in the instance document at parse/validation time
        - use multiple schemata to validate an XML document
        - declare schema locations on non-root elements (not of extreme importance)

    Maybe I should look for a different library? Although that'd be a real shame - lxml is the de-facto XML processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent).
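
    For reference (not from the original post), the 'external schema' procedure summarized in steps 1-3 above looks roughly like this in lxml; the file names are placeholders:

        from lxml import etree

        # 1. parse the schema document and build the XMLSchema object
        schema = etree.XMLSchema(etree.parse("document.xsd"))

        # 2. build a parser that validates against that schema
        parser = etree.XMLParser(schema=schema)

        # 3. parse the instance document with that parser; invalid documents raise
        try:
            doc = etree.parse("instance.xml", parser)
        except etree.XMLSyntaxError as err:
            print("validation failed:", err)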

    Read the article

  • jQuery selector for document and another element

    - by James
    Can I merge these in jQuery? I'm using the HotKeys plugin.

        $(document).bind('keydown', 'n', cycleNoise);
        $(document).bind('keydown', 's', pickRandom);
        $(document).bind('keydown', 'v', toggleTasks);
        $(document).bind('keydown', 't', toggleTimer);
        $(document).bind('keydown', 'up', hideTask);
        $(document).bind('keydown', 'down', nextTask);

        $('#duration').bind('keydown', 't', toggleTimer);
        $('#duration').bind('keydown', 'n', cycleNoise);
        $('#duration').bind('keydown', 's', pickRandom);
        $('#duration').bind('keydown', 'v', toggleTasks);
        $('#duration').bind('keydown', 'up', hideTask);
        $('#duration').bind('keydown', 'down', nextTask);

    In other words, is it possible to use document and '#duration' in the same selector? The following doesn't seem to work:

        $(document + ',#duration').bind(...);

    Read the article

  • Creating a Document Workspace for a Workflow in SharePoint

    - by Tots
    I need to create a workflow that routes a document to a reviewer and, upon approval, creates a document workspace in SharePoint 2007. Setting up the workflow for the approval of the document is simple, but I'm unsure of how to create the document workspace upon approval. I want the document workspace to be created programmatically once the document is approved. Can someone recommend an approach for accomplishing this?

    Read the article

  • Are document-oriented databases meant to replace relational databases?

    - by evolve
    Recently I've been working a little with MongoDB and I have to say I really like it. However, it is a completely different type of database than I am used to. I've noticed that it is most definitely better for certain types of data; however, for heavily normalized databases it might not be the best choice. It appears to me, however, that it can completely take the place of just about any relational database you may have, and in most cases perform better, which is mind-boggling. This leads me to ask a few questions:

        - Have document-oriented databases been developed to be the next generation of databases and basically replace relational databases completely?
        - Is it possible that projects would be better off using both a document-oriented database and a relational database side by side, for the various data that is better suited to one or the other?
        - If document-oriented databases are not meant to replace relational databases, then does anyone have an example of a database structure which would absolutely be better off in a relational database (or vice versa)?
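
    To make the normalization point concrete, here is a small illustrative sketch (not from the original post; the order/customer fields are made up) of the same data stored as one self-contained document versus spread over normalized rows:

        # One denormalized document: everything needed to show the order in one read.
        order_document = {
            "order_id": 1001,
            "customer": {"name": "Ada", "email": "ada@example.com"},
            "lines": [
                {"sku": "A-1", "qty": 2, "price": 9.50},
                {"sku": "B-7", "qty": 1, "price": 4.00},
            ],
        }

        # The relational shape: the same data split into rows joined by keys, which
        # keeps customers and products in one place but requires joins to reassemble.
        customers = [(1, "Ada", "ada@example.com")]
        orders = [(1001, 1)]                      # order_id, customer_id
        order_lines = [(1001, "A-1", 2, 9.50),
                       (1001, "B-7", 1, 4.00)]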

    Read the article

  • Isolated storage misunderstanding

    - by Costa
    Hi, this is a discussion between me and myself to understand an isolated storage issue. Can you help convince me about isolated storage?

        - This is code in a Windows Forms app (reader) that reads the isolated storage of another Windows Forms app (writer), which is signed. Where is the security if the reader can read the writer's file? I thought only signed code could access the file!
        - If all .NET applications are born equal and have full permission to access isolated storage, where is the security then?
        - If I can install and run an exe from isolated storage, why wouldn't I install a virus and run it? I am trusted to access this area, but the virus or whatever will not be trusted to access the rest of the file system; it can only access the memory, and this is dangerous enough.
        - I cannot see any difference between using the app data folder to save state and using isolated storage, except a long nasty path!
        - I want to try giving low trust to the Reader code and retest, but they said "Isolated storage is actually created for giving low-trusted applications the right to save their state".

    Reader code:

        private void button1_Click(object sender, EventArgs e)
        {
            String path = @"C:\Documents and Settings\All Users\Application Data\IsolatedStorage\efv5cmbz.ewt\2ehuny0c.qvv\StrongName.5v3airc2lkv0onfrhsm2h3uiio35oarw\AssemFiles\toto12\ABC.txt";
            StreamReader reader = new StreamReader(path);
            var test = reader.ReadLine();
            reader.Close();
        }

    Writer:

        private void button1_Click(object sender, EventArgs e)
        {
            IsolatedStorageFile isolatedFile = IsolatedStorageFile.GetMachineStoreForAssembly();
            isolatedFile.CreateDirectory("toto12");
            IsolatedStorageFileStream isolatedStorage = new IsolatedStorageFileStream(@"toto12\ABC.txt", System.IO.FileMode.Create, isolatedFile);
            StreamWriter writer = new StreamWriter(isolatedStorage);
            writer.WriteLine("Ana 2akol we ashrab kai a3eesh wa akbora");
            writer.Close();
            writer.Dispose();
        }

    Read the article

  • How to create a lookup column that targets a Doc Lib and uses the 'Name' of the document?

    - by stlawrence
    How do you create a lookup column to a Document Library that uses the 'Name' of the document as the lookup value? I found a blog post that recommends adding another custom field like "FileName" and then using an item receiver to populate the custom field with the value from the Name field, but that seems cheesy. Link to the blog in case people are interested: http://blogs.msdn.com/pranab/archive/2008/01/08/sharepoint-2007-moss-wss-issue-with-lookup-column-to-doc-lib-name-field.aspx I've got a bunch of custom document content types that I don't want to clutter with a workaround for something that should really work anyway.

    Read the article

  • Can't edit a specific document in Word 2007

    - by Benjotron
    I have a document in Word 2007 that seems to be read-only. There are forms in the document that I can type in, but I can't edit or reformat the rest of the document. There is probably a setting somewhere I can flip to make it editable again, but I can't find it for the life of me. FOLLOW UP: The "Protect Document" button only had "Unrestricted Access" checked; this was one of the first things I checked. However, when I tried checking "Restrict Formatting and Editing" it brought up the Restrict Formatting and Editing sidebar, which stated: "This document is protected from unintentional editing. You may only fill in forms in this region." There was a Stop Protection button at the bottom, which of course solved the problem. I think that menu item just has a bad name; it should be "Restrict Formatting and Editing Options or Settings".

    Read the article

  • How to save a document and close it after the save

    - by Le Questione
    Currently I have a .bat file that opens a program, opens a document, and then reloads this document so it contains new data. After that I need a timeout so it can run some other things within the document, which takes about 2 minutes in total, but for safety I used 5 minutes. Then I want it to save the document and close the program. I've searched a lot for this but I can't find anything that saves the document and then closes it; I can only find how to save a .bat file, but I already have that!

        CD /D "D:\01 - Config\02 - Developer"
        start qv.exe /l "D:\02 - Content\07 - Accesspoint\tracktrace.qvw"
        timeout /t 300
        ????
        exit 0

    Please advise, thanks.

    Read the article

  • Excel file has disappeared from SharePoint document library

    - by user40389
    Two days ago an Excel file went missing from a document library. This document library only had this one file, and now it's gone. When I go to All Site Content -> Document Libraries, it shows that there is still one file in the library. Seems like there is something screwy. Is there anything I can do to get this item to reappear? MOSS 2007. Recycle bin: must be at the default; I never changed anything and really don't know how to find the settings for this. Document Library Settings -> Versioning Settings:

        content approval: no
        document version history: create major versions
        require check out: yes

    Read the article

  • Document should be editable but not be able to be saved

    - by user359406
    I have a Word document in Word 2010. The document is a series of questions to be asked of the customer. All the technicians open the document while talking to the customer. Technicians should be able to write in the document, but if they try to save it, it should prevent them from doing so. How is this possible? In Word 2007, I set a password for the document, so it gives an option to open read-only if you don't know the password.

    Read the article

  • The internal storage of a DATETIME2 value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIME2 datatype. What I found out was that for a datetime2 value with precision 0 (seconds only), SQL Server needs 6 bytes to represent the value, but stores 7 bytes. This is because SQL Server adds one byte that holds the precision for the datetime2 value. Start with this very simple repro:

        declare @now datetime2(7) = '2010-12-15 21:04:03.6934231'

        select  cast(cast(@now as datetime2(0)) as binary(7)),
                cast(cast(@now as datetime2(1)) as binary(7)),
                cast(cast(@now as datetime2(2)) as binary(7)),
                cast(cast(@now as datetime2(3)) as binary(8)),
                cast(cast(@now as datetime2(4)) as binary(8)),
                cast(cast(@now as datetime2(5)) as binary(9)),
                cast(cast(@now as datetime2(6)) as binary(9)),
                cast(cast(@now as datetime2(7)) as binary(9))

    Now we are going to copy and paste these binary values and investigate which value is representing what time part.

        Prefix  Ticks       Ticks         Days    Days    Original value
        ------  ----------  ------------  ------  ------  --------------------
        0x  00  442801             75844  A8330B  734120  0x00442801A8330B
        0x  01  A5920B            758437  A8330B  734120  0x01A5920BA8330B
        0x  02  71BA73           7584369  A8330B  734120  0x0271BA73A8330B
        0x  03  6D488504        75843693  A8330B  734120  0x036D488504A8330B
        0x  04  46D4342D       758436934  A8330B  734120  0x0446D4342DA8330B
        0x  05  BE4A10C401    7584369342  A8330B  734120  0x05BE4A10C401A8330B
        0x  06  6FEBA2A811   75843693423  A8330B  734120  0x066FEBA2A811A8330B
        0x  07  57325D96B0  758436934231  A8330B  734120  0x0757325D96B0A8330B

    Let us use the following color schema: Red - Prefix, Green - Time part, Blue - Day part.

    What you can see is that the date part is equal in all cases, which makes sense since the precision doesn't affect the date part. What would have been fun is datetime2(negative), just like ROUND accepts a negative value: -1 would mean rounding to 10 seconds, -2 rounding to the minute, -3 rounding to 10 minutes, -4 rounding to the hour and finally -5 rounding to 10 hours. -5 is pretty useless, but if you extend this thinking to -6, -7 and so on, you could actually get a datetime2 value which is accurate to the month only. Well, enough ranting about this. Let's get back to the table above. If you add 75844 seconds to midnight, you get 21:04:04, which is exactly what you got in the select statement above. And if you look at it, it makes perfect sense that each following value is 10 times greater when the precision is increased one step too. //Peter
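
    As a sanity check of the layout described above (not SQL Server code, just a small Python sketch), the datetime2(7) value from the table can be decoded by hand: one precision byte, then the time ticks, then three day bytes, all little-endian.

        from datetime import datetime, timedelta

        raw = bytes.fromhex("0757325D96B0A8330B")     # datetime2(7) row above
        precision = raw[0]                            # 0x07
        ticks = int.from_bytes(raw[1:-3], "little")   # 758436934231 units of 10**-7 s
        days  = int.from_bytes(raw[-3:], "little")    # 734120 days since 0001-01-01

        value = datetime(1, 1, 1) + timedelta(days=days, seconds=ticks / 10 ** precision)
        print(value)                                  # 2010-12-15 21:04:03.693423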

    Read the article

  • SQLBeat Podcast – Episode 4 – Mark Rasmussen on Machine Guns, Jelly Fish and SQL Storage Engine

    - by SQLBeat
    In this 4th SQLBeat Podcast I talk with fellow Dane Mark Rasmussen about SQL, machine guns and jellyfish fights; apparently they are common in our homeland. Who am I kidding, I am not Danish, but I try to be in this podcast. Also, we exchange knowledge on SQL Server storage engine particulars as well as some other "internals" like password hashes and contained databases. And then it just gets weird and awesome. There is lots of background noise from people who did not realize we were recording, and I call them out and make fun of them as they deserve; well, just one person who is well known in these parts. I also learn the correct (almost) pronunciation of "fjord". Seriously, a word with an "F" followed by a "J". And there are always the hippies and hipsters to discuss. Should be fun.

    Read the article

  • The internal storage of a DATETIMEOFFSET value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIMEOFFSET datatype. What I found out was that for a datetimeoffset value with precision 0 (seconds only), SQL Server needs 8 bytes to represent the value, but stores 9 bytes. This is because SQL Server adds one byte that holds the precision for the datetimeoffset value. Start with this very simple repro:

        declare @now datetimeoffset(7) = '2010-12-15 21:04:03.6934231 +03:30'

        select  cast(cast(@now as datetimeoffset(0)) as binary(9)),
                cast(cast(@now as datetimeoffset(1)) as binary(9)),
                cast(cast(@now as datetimeoffset(2)) as binary(9)),
                cast(cast(@now as datetimeoffset(3)) as binary(10)),
                cast(cast(@now as datetimeoffset(4)) as binary(10)),
                cast(cast(@now as datetimeoffset(5)) as binary(11)),
                cast(cast(@now as datetimeoffset(6)) as binary(11)),
                cast(cast(@now as datetimeoffset(7)) as binary(11))

    Now we are going to copy and paste these binary values and investigate which value is representing what time part.

        Prefix  Ticks       Ticks         Days    Days    Suffix  Suffix  Original value
        ------  ----------  ------------  ------  ------  ------  ------  ------------------------
        0x  00  0CF700             63244  A8330B  734120  D200       210  0x000CF700A8330BD200
        0x  01  75A609            632437  A8330B  734120  D200       210  0x0175A609A8330BD200
        0x  02  918060           6324369  A8330B  734120  D200       210  0x02918060A8330BD200
        0x  03  AD05C503        63243693  A8330B  734120  D200       210  0x03AD05C503A8330BD200
        0x  04  C638B225       632436934  A8330B  734120  D200       210  0x04C638B225A8330BD200
        0x  05  BE37F67801    6324369342  A8330B  734120  D200       210  0x05BE37F67801A8330BD200
        0x  06  6F2D9EB90E   63243693423  A8330B  734120  D200       210  0x066F2D9EB90EA8330BD200
        0x  07  57C62D4093  632436934231  A8330B  734120  D200       210  0x0757C62D4093A8330BD200

    Let us use the following color schema: Red - Prefix, Green - Time part, Blue - Day part, Purple - UTC offset.

    What you can see is that the date part is equal in all cases, which makes sense since the precision doesn't affect the date part. If you add 63244 seconds to midnight, you get 17:34:04, which is the correct UTC time. So what is stored is the UTC time, and the local time can be found by adding "utc offset" minutes. And if you look at it, it makes perfect sense that each following value is 10 times greater when the precision is increased one step too. //Peter
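
    The same kind of hand-decoding (again just a Python sketch, not SQL Server code) also shows the UTC offset sitting in the last two bytes as minutes, with the time part stored in UTC:

        from datetime import datetime, timedelta

        raw = bytes.fromhex("0757C62D4093A8330BD200")   # datetimeoffset(7) row above
        precision = raw[0]                              # 0x07
        ticks  = int.from_bytes(raw[1:-5], "little")    # 632436934231 -> 17:34:03.6934231 UTC
        days   = int.from_bytes(raw[-5:-2], "little")   # 734120 -> 2010-12-15
        offset = int.from_bytes(raw[-2:], "little")     # 210 minutes = +03:30

        local = datetime(1, 1, 1) + timedelta(days=days,
                                              seconds=ticks / 10 ** precision,
                                              minutes=offset)
        print(local)                                    # 2010-12-15 21:04:03.693423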

    Read the article

  • Strategy for backwards compatibility of persistent storage

    - by Baqueta
    In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. What I currently do is save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can do v3-v4-v5-v6-v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?
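
    A minimal sketch of that chained-upgrade idea, shortened to a v3 -> v5 chain to keep it small; the field names and converter bodies are made up for illustration, not taken from the post:

        CURRENT_VERSION = 5

        def v3_to_v4(doc):
            doc["title"] = doc.pop("name", "")    # e.g. a field renamed in v4
            doc["version"] = 4
            return doc

        def v4_to_v5(doc):
            doc.setdefault("tags", [])            # e.g. a field added in v5
            doc["version"] = 5
            return doc

        UPGRADES = {3: v3_to_v4, 4: v4_to_v5}     # one small converter per step

        def upgrade(doc):
            while doc["version"] < CURRENT_VERSION:
                doc = UPGRADES[doc["version"]](doc)
            return doc

        print(upgrade({"version": 3, "name": "old record"}))
        # {'version': 5, 'title': 'old record', 'tags': []}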

    Read the article

  • Storage of leftover values in a situation of having to round down

    - by jt0dd
    I'm writing an app (client and server side) where the number of sales required by each employee must be kept track of in round-number form. Each month, the employees are required to sell a certain number, and this app needs to keep track of how many sales must be made for each 12 hour interval during the work week. Because I have to round the values down to a whole number, I must keep track of leftovers in the rounding process and ensure that they are always carried over. My method must ensure the storage of the leftover value even when client and server side crash, restart, close, etc. Right now, I'm working on doing this by storing the leftovers in a field in the user's account row in the database each time a value is rounded, reading the stored value, removing any portion that is used (when a whole number is reached, most of the leftover is used up), and storing the new value. This practice seems weird because while the leftovers are calculated on the client side, it's the same number for each employee, and every employee using the app is storing a copy of the same leftover data. Alternatively, I could have all clients store the data at once into the same data field on a general table, but this is just as weird. Is there a better way that this can be handled or is my method correct?
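
    One way to picture the carry-over (a sketch with made-up numbers, not the poster's actual schema): round each interval's quota down and persist only the fractional remainder, so nothing is lost even across crashes or restarts.

        import math

        def allocate(quota_per_interval: float, carried_leftover: float):
            total = quota_per_interval + carried_leftover
            assigned = math.floor(total)      # whole sales required this interval
            leftover = total - assigned       # persist this value (e.g. in the user's row)
            return assigned, leftover

        leftover = 0.0
        for interval in range(5):             # five 12-hour intervals at 2.3 sales each
            assigned, leftover = allocate(2.3, leftover)
            print(interval, assigned, round(leftover, 2))
        # assigns 2, 2, 2, 3, 2 ... with the fraction carried forward each time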

    Read the article

  • Fastest way to document software architecture and design

    - by Karsten
    We are a small team of 5 developers and I'm looking for some good advice about how to document the software architecture and design. I'm going for the sweet spot, where the time invested pays off. I don't want to spend more time documenting than necessary. I'll quickly give you my thoughts. What are the diagrams I should make? I'm thinking an overall diagram showing the various applications and services, and then some sequence diagrams showing the most important or complicated processes. About the code itself, I really don't see much value in describing or making diagrams for the code outside the .cs files themselves. About text documents, I'm a bit uncertain about when to put things down on paper. Most developers don't like to either write or read long documents.

    Read the article

  • Some AdSense domains' ads are causing document.write() statements that remove the HTML from the page

    - by er1234
    All that is output on the page is the domain name of the advertiser, for example 'www.solar-aid.org'. The rest of the content is stripped, I believe because of a document.write() statement. I'd like to know if this is a common issue or something wrong with our setup. There are three domains causing the issue, which we've blocked from AdSense as a result:

        solar-aid.org
        kiva.org
        grameenfoundation.org

    Given the type of organizations, I think they may be within the default group of 'public service ads' within the Backup Ads setting. If the issue doesn't completely resolve itself soon (one customer of ours complained today, even though I blocked them 5+ days ago), I'll disable public service ads and select the 'fill space with a solid color' option.

    Read the article

  • A Simple Online Document Management System Using Asp.net MVC 3

    - by RazanPaul
    Nowadays we have a number of online file management systems (e.g. DropBox, SkyDrive and Google Drive). People can use them to manage different types of documents. However, one might need a system to manage documents without publishing the company's documents to the cloud. In that case, they need to build an online document management system. This project is intended to meet this purpose; however, it is at an early stage. All the functionality seems to be working, but a lot of work is needed in the UI. Besides this, the code needs refactoring. Please find the project at the following link: https://documentmanagementsystem.codeplex.com/

    Read the article

  • Reformatting an XML document

    - by Joseph Reeves
    I have an XML document in the format below:

        <key>value</key>
        <key>value</key>
        <key>value</key>

    But I need to convert it to the following:

        <tag k='key' v='value' />
        <tag k='key' v='value' />
        <tag k='key' v='value' />

    The original XML file is roughly 20,000 lines long, so I'm keen to automate as much as possible! I've looked at xmlstarlet, but drew a blank with it. Presumably it would be a good place to start though? Help gratefully received, thanks.
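
    Not from the original post, but one way to automate the conversion with Python's standard library; this assumes the 20,000 lines are wrapped in a single root element so the file parses as XML, and the file names are placeholders:

        import xml.etree.ElementTree as ET

        root = ET.parse("input.xml").getroot()
        out = ET.Element(root.tag)                 # keep the same root element
        for child in root:
            ET.SubElement(out, "tag", k=child.tag, v=(child.text or ""))
        # note: ElementTree writes attributes with double quotes rather than single
        ET.ElementTree(out).write("output.xml")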

    Read the article

  • How to document/verify consistent layering?

    - by Morten
    I have recently moved to the dark side: I am now a CUSTOMER of software development -- mainly websites. With this new role come new concerns. As a programmer I know how solid an application becomes when it is properly layered, and I want to use this knowledge in my new job. I don't want business logic in my presentation layer, and certainly no presentation stuff in my data layer. Thus, I want to be able to demand from my supplier that they document the level of layering, and how neat and consistent the layering is. The big question is: how is the level of layering documented to me as a customer, and is that a reasonable demand for me to have, so I don't have to look in the code (I'm not supposed to do that anymore)?

    Read the article

  • How should calculations be handled in a document database

    - by Morten
    Ok, so I have a program that basically logs errors into a NoSQL database. Right now there is just a single model for an error, and it's stored as a document in the NoSQL database. Basically I want to summarize across different errors and produce a summary of the "types" of errors that occurred. Traditionally, in a SQL database this normalization would work with groupings, sums and averages, but in a NoSQL database I assume I need to use mapreduce. My current model seems unfit for the task; how should I change the way I store "models" in order to make statistical analysis easy? Would a NoSQL database even be the right tool for this type of problem? I'm storing things in Google App Engine's BigTable, so there are some limitations to think of as well.
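
    As a rough illustration of the map/reduce shape in plain Python (the field names are made up; this is not App Engine's actual mapreduce API): emit a (type, 1) pair per error document, then reduce by summing per key.

        from collections import Counter

        errors = [
            {"type": "Timeout", "message": "upstream took too long"},
            {"type": "Timeout", "message": "db took too long"},
            {"type": "NullReference", "message": "missing user"},
        ]

        mapped = ((doc["type"], 1) for doc in errors)   # map: emit (key, value)
        summary = Counter()
        for key, count in mapped:                       # reduce: sum values per key
            summary[key] += count

        print(summary)   # Counter({'Timeout': 2, 'NullReference': 1})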

    Read the article
