Search Results

Search found 35433 results on 1418 pages for 'document based'.


  • ORM in the real world

    - by josh
    I am beginning a new project that I think will last for some years, and I am at the point of deciding which ORM framework to use (or whether to use one at all). Can anyone with experience tell me whether ORM frameworks hold up in real-world applications? The problem I have in mind is this: the ORM tool will generate tables and columns for me as I create and modify my entities. However, after the project has gone live and is in production, certain database changes will no longer be possible. Can this hinder the advancement of the project? If I had used a framework like iBATIS, for example, I know I would only need to adjust the SQL statements to match the database changes. Can someone tell me whether ORM tools have survived the live environment? At my office, we use a Java-based ERP that was written long ago, and it was never done using any ORM framework. Regards, Josh
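
    One thing worth noting (a sketch, not a recommendation for a specific ORM): most JPA/Hibernate-style ORMs let you switch off schema generation entirely and map entities onto a schema that the DBAs own, so a frozen production schema does not have to block entity changes. The entity, table and column names below are made up for illustration.

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Table;

        // Mapped explicitly onto an existing table instead of letting the ORM generate it.
        // With Hibernate, for example, hibernate.hbm2ddl.auto would be set to "validate"
        // (or left unset) in production so the tool never alters the live schema.
        @Entity
        @Table(name = "CUSTOMER_ACCOUNT")          // hypothetical existing table
        public class CustomerAccount {

            @Id
            @Column(name = "ACCOUNT_ID")           // hypothetical existing column
            private Long id;

            @Column(name = "ACCOUNT_NAME", length = 80)
            private String name;

            public Long getId() { return id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }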


  • Problem adding additional content to my PDF

    - by Ayyappan.Anbalagan
    I am converting a DataSet into a PDF document. The DataSet contains the product bill details, so at the top of the PDF I need to add some more content such as my company name and address, the customer name, the date of the bill and the bill number. The code below is what I am using for the conversion:

        public static void Exportdata(DataTable dataTable, HttpResponse Response, int val)
        {
            //String filename = String.Concat(name, "-", DateTime.Today.Day.ToString(), "/", DateTime.Today.Month.ToString(), "/", DateTime.Today.Year.ToString(), ".pdf");
            Document pdfDoc = new Document(PageSize.A4, 30, 30, 40, 25);
            System.IO.MemoryStream mStream = new System.IO.MemoryStream();
            PdfWriter writer = PdfWriter.GetInstance(pdfDoc, mStream);
            //int cols = 0;
            //int rows = 0;
            int cols = dataTable.Columns.Count;
            int rows = dataTable.Rows.Count;
            pdfDoc.Open();

            iTextSharp.text.Table pdfTable = new iTextSharp.text.Table(cols, rows);
            pdfTable.BorderWidth = 1;
            pdfTable.Width = 100;
            pdfTable.Padding = 1;
            pdfTable.Spacing = 1;

            // creating table headers
            for (int i = 0; i < cols; i++)
            {
                Cell cellCols = new Cell();
                Font ColFont = FontFactory.GetFont(FontFactory.HELVETICA, 8, Font.BOLD);
                Chunk chunkCols = new Chunk(dataTable.Columns[i].ColumnName, ColFont);
                cellCols.Add(chunkCols);
                pdfTable.AddCell(cellCols);
            }

            // creating table data (actual result)
            for (int k = 0; k < rows; k++)
            {
                for (int j = 0; j < cols; j++)
                {
                    Cell cellRows = new Cell();
                    Font RowFont = FontFactory.GetFont(FontFactory.HELVETICA, 6);
                    Chunk chunkRows = new Chunk(dataTable.Rows[k][j].ToString(), RowFont);
                    cellRows.Add(chunkRows);
                    pdfTable.AddCell(cellRows);
                }
            }

            pdfDoc.Add(pdfTable);
            pdfDoc.Close();

            Response.ContentType = "application/octet-stream";
            if (val == 1)
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Users.pdf");
            }
            else if (val == 2)
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Customers.pdf");
            }
            else if (val == 3)
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Materials.pdf");
            }
            else
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Reports.pdf");
            }
            Response.Clear();
            Response.BinaryWrite(mStream.ToArray());
            //Response.Write(mStream.ToString());
            HttpContext.Current.ApplicationInstance.CompleteRequest();
            Response.End();
        }
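
    A sketch of one way to get those extra lines above the table with the same iTextSharp API the method already uses: build a Paragraph (or several) and add it to the Document after pdfDoc.Open() and before pdfDoc.Add(pdfTable). The helper name AddBillHeader is mine, and the company, customer and bill values are placeholders for whatever your bill object holds.

        // Hypothetical helper; assumes the same iTextSharp.text usings as the method above.
        // Call it right after pdfDoc.Open() and before pdfDoc.Add(pdfTable).
        private static void AddBillHeader(Document pdfDoc, string customerName, string billNo)
        {
            Font headerFont = FontFactory.GetFont(FontFactory.HELVETICA, 12, Font.BOLD);
            Font normalFont = FontFactory.GetFont(FontFactory.HELVETICA, 9);

            pdfDoc.Add(new Paragraph("My Company Name", headerFont));
            pdfDoc.Add(new Paragraph("123 Example Street, Example City", normalFont));
            pdfDoc.Add(new Paragraph("Customer: " + customerName, normalFont));
            pdfDoc.Add(new Paragraph("Bill No: " + billNo, normalFont));
            pdfDoc.Add(new Paragraph("Date: " + DateTime.Today.ToShortDateString(), normalFont));
            pdfDoc.Add(new Paragraph(" "));   // a little spacing before the table
        }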


  • C# XPath Not Finding Anything

    - by ehdv
    I'm trying to use XPath to select the items which have a facet with Location values, but currently even my attempts to just select all items fail: the system happily reports that it found 0 items and then returns (instead, the nodes should be processed by a foreach loop). I'd appreciate help either with my original query or with just getting XPath to work at all.

    XML:

        <?xml version="1.0" encoding="UTF-8" ?>
        <Collection Name="My Collection" SchemaVersion="1.0"
            xmlns="http://schemas.microsoft.com/collection/metadata/2009"
            xmlns:p="http://schemas.microsoft.com/livelabs/pivot/collection/2009"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <FacetCategories>
            <FacetCategory Name="Current Address" Type="Location"/>
            <FacetCategory Name="Previous Addresses" Type="Location" />
          </FacetCategories>
          <Items>
            <Item Id="1" Name="John Doe">
              <Facets>
                <Facet Name="Current Address">
                  <Location Value="101 America Rd, A Dorm Rm 000, Chapel Hill, NC 27514" />
                </Facet>
                <Facet Name="Previous Addresses">
                  <Location Value="123 Anywhere Ln, Darien, CT 06820" />
                  <Location Value="000 Foobar Rd, Cary, NC 27519" />
                </Facet>
              </Facets>
            </Item>
          </Items>
        </Collection>

    C#:

        public void countItems(string fileName)
        {
            XmlDocument document = new XmlDocument();
            document.Load(fileName);
            XmlNode root = document.DocumentElement;
            XmlNodeList xnl = root.SelectNodes("//Item");
            Console.WriteLine(String.Format("Found {0} items", xnl.Count));
        }

    There's more to the method than this, but since this is all that gets run I'm assuming the problem lies here. Calling root.ChildNodes accurately returns FacetCategories and Items, so I am completely at a loss. Thanks for your help!
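
    For what it's worth, the symptom described (0 matches even for //Item, while ChildNodes looks fine) is exactly what an unmapped default namespace produces: the Collection document puts every element in http://schemas.microsoft.com/collection/metadata/2009, and an unprefixed XPath step only matches elements in no namespace. A sketch of the usual fix, with "c" as an arbitrary prefix chosen here:

        public void countItems(string fileName)
        {
            XmlDocument document = new XmlDocument();
            document.Load(fileName);

            // Map some prefix (any prefix) to the document's default namespace so the
            // XPath steps can be qualified; "c" is an arbitrary choice.
            XmlNamespaceManager nsmgr = new XmlNamespaceManager(document.NameTable);
            nsmgr.AddNamespace("c", "http://schemas.microsoft.com/collection/metadata/2009");

            XmlNodeList xnl = document.SelectNodes("//c:Item", nsmgr);
            Console.WriteLine(String.Format("Found {0} items", xnl.Count));

            // The facet query would be qualified the same way, e.g.
            // document.SelectNodes("//c:Item[.//c:Facet/c:Location]", nsmgr);
        }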


  • Struts and logging HTTP POST request body

    - by Ivan Vrtaric
    I'm trying to log the raw body of HTTP POST requests in our application based on Struts, running on Tomcat 6. I've found one previous post on SO that was somewhat helpful, but the accepted solution doesn't work properly in my case. The problem is, I want to log the POST body only in certain cases, and let Struts parse the parameters from the body after logging. Currently, in the Filter I wrote I can read and log the body from the HttpServletRequestWrapper object, but after that Struts can't find any parameters to parse, so the DispatchAction call (which depends on one of the parameters from the request) fails. I did some digging through Struts and Tomcat source code, and found that it doesn't matter if I store the POST body into a byte array, and expose a Stream and a Reader based on that array; when the parameters need to get parsed, Tomcat's Request object accesses its internal InputStream, which has already been read by that time. Does anyone have an idea how to implement this kind of logging correctly?
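
    For reference, the usual shape of this is a wrapper that caches the body bytes in the filter and then answers both the stream and the parameter lookups from that cache, so nothing depends on Tomcat's already-consumed internal stream. The sketch below is simplified on purpose (UTF-8 only, single-valued parameters, application/x-www-form-urlencoded only), the class name is made up, and Struts will also call getParameterNames()/getParameterValues() during populate, which would need the same treatment.

        import java.io.BufferedReader;
        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.URLDecoder;
        import java.util.HashMap;
        import java.util.Map;
        import javax.servlet.ServletInputStream;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletRequestWrapper;

        public class CachedBodyRequest extends HttpServletRequestWrapper {

            private final byte[] body;
            private Map<String, String> parsed;

            public CachedBodyRequest(HttpServletRequest request) throws IOException {
                super(request);
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                InputStream in = request.getInputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                body = buf.toByteArray();
            }

            /** The raw body, for logging in the filter. */
            public String getBody() throws IOException {
                return new String(body, "UTF-8");
            }

            @Override
            public ServletInputStream getInputStream() {
                final ByteArrayInputStream in = new ByteArrayInputStream(body);
                return new ServletInputStream() {          // Servlet 2.5-style, as on Tomcat 6
                    public int read() {
                        return in.read();
                    }
                };
            }

            @Override
            public BufferedReader getReader() {
                return new BufferedReader(new InputStreamReader(getInputStream()));
            }

            @Override
            public String getParameter(String name) {
                if (parsed == null) {
                    parsed = parseFormBody();
                }
                String value = parsed.get(name);
                return value != null ? value : super.getParameter(name);
            }

            // Naive application/x-www-form-urlencoded parsing of the cached bytes.
            private Map<String, String> parseFormBody() {
                Map<String, String> map = new HashMap<String, String>();
                try {
                    for (String pair : getBody().split("&")) {
                        int eq = pair.indexOf('=');
                        if (eq > 0) {
                            map.put(URLDecoder.decode(pair.substring(0, eq), "UTF-8"),
                                    URLDecoder.decode(pair.substring(eq + 1), "UTF-8"));
                        }
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
                return map;
            }
        }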


  • UINavigationController from UIViewController

    - by 4thSpace
    I currently have this workflow in a tab-based app. Tab1 loads: ViewOne : UIViewController >> PickerView : UIViewController >> DetailView : UIViewController (">>" means loads based on user action). I'd like navigation bars on PickerView and DetailView. PickerView just needs a cancel button in the top left of its nav bar; DetailView needs the normal nav bar back button. I already have PickerView's nav bar wired up through IB and working, but I'm not sure what to do with DetailView's nav bar. PickerView is also loaded from Tab2, whose main view starts as a UINavigationController, and PickerView's nav bar works fine in that case. ViewOne should not have a navigation bar. Any ideas?


  • How to strip namespaces with E4X?

    - by SorcyCat
    I have an arbitrary XML document provided by a URL. I also have an XPath-like expression:

        var xml = <doc><node1><node2><node3>some value</node3></node2></node1></doc>;
        var path = "node1.node2.node3";

    I need to verify whether the above path into the XML is valid. I tried to do this using eval and E4X:

        var value = eval("xml." + path);

    However, my actual XML document has namespaces which are getting in the way. I do not know the namespaces ahead of time or care what they are. Is there a way to strip all the namespaces out ahead of time? Is there a better way to do this? Thanks!
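
    One crude option people reach for when the namespaces genuinely don't matter is to strip them from the raw markup before building the XML object; a sketch is below (function names are mine). It is deliberately quick-and-dirty (regexes on XML), so it will misbehave on documents with prefixed attribute values or markup inside CDATA. E4X also has a namespace-wildcard qualifier (xml.*::node1) that sidesteps the problem for statically written paths.

        // Sketch: remove the XML prolog, xmlns declarations and element prefixes, then
        // eval the path against the cleaned document as before. xmlText is the string
        // fetched from the URL.
        function stripNamespaces(xmlText) {
            return xmlText
                .replace(/^\s*<\?xml[^?]*\?>/, "")             // new XML() dislikes the prolog
                .replace(/xmlns(:\w+)?\s*=\s*"[^"]*"/g, "")    // drop xmlns / xmlns:foo declarations
                .replace(/<(\/?)\w+:/g, "<$1");                // <p:node> / </p:node>  ->  <node> / </node>
        }

        function pathIsValid(xmlText, path) {
            var xml = new XML(stripNamespaces(xmlText));
            var value = eval("xml." + path);                   // same lookup as before
            return value.length() > 0;                         // empty XMLList means the path misses
        }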


  • In WPF RichTextBox, does overriding of Underline/Strikethrough work?

    - by Daniel Earwicker
    In a WPF RichTextBox, the effective style of a Run of text is the result of combining the properties defined on the Run with the properties it "inherits" from the enclosing Paragraph and finally the styles on the Document. So you can set FontWeight to Bold at any of those levels. You can also set it Bold on the Paragraph and then switch it to Normal (override it) for a specific Run. However, underline and strikethrough are different. They are items that can optionally appear in a list of TextDecorations, which is a property of Inline (and hence Run) and of Paragraph, but not of Document. You can switch on Underline in the Paragraph, and it gets inherited so that all Runs within that Paragraph appear underlined by default. Is it possible to switch underline off in a specific Run? i.e. is there a way to insert an entry into the list of TextDecorations which would mean "don't underline", thus overriding the Paragraph's setting?


  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data s

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. The requirements are:

    - each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
    - other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of the flashcard files they want to subscribe to, with URLs and imported flashcards being saved in IsolatedStorage
    - the flashcards.txt file has the following simple format: a title, then blocks of question/answer pairs:

        Jim Smith's flashcards from English class 53-222, winter semester 2009
        ==fla
        Das kann nicht sein.
        That can't be.
        ==fla
        Es sei denn, er kommt nicht.
        Unless he doesn't come.

    The user then makes the URL to his flashcard file public, and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and whose URL they distribute. The flashcard readers will then recognize that it is a Google Document and perform the necessary screen scraping to get at the raw text. I have two technical questions about this approach:

    1. What is the best way to plan now for scalability issues? E.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published Google Docs?
    2. Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this metadata, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard; the best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks.

    The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a Google Document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of a database, lack of unique IDs, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips can you share on the above two points?
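
    On question 1, the cheap first thing to try is a conditional or HEAD request, so only the Last-Modified (or ETag) header is fetched rather than the whole file. Below is a rough sketch with the plain .NET HttpWebRequest API; in Silverlight you would have to use the asynchronous Begin/EndGetResponse pattern instead, and whether the header is reachable at all depends on the HTTP stack in use and on the server (a published Google Doc may not send a meaningful Last-Modified), so treat this as an assumption to verify.

        using System;
        using System.Net;

        public static class FeedChecker
        {
            // Sketch: ask the server for headers only and compare Last-Modified with the
            // timestamp remembered from the previous fetch (lastSeen).
            public static bool HasChanged(string url, DateTime lastSeen)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "HEAD";                       // headers only, no 200K body
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    return response.LastModified > lastSeen;   // header may be absent or unreliable
                }
            }
        }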


  • Save Div scroll position in a MasterPage

    - by Luis
    I have successfully done this in the content pages, but in the MasterPage I am not able to get it working. I tried the following, the same way I did it for the content pages:

        <script type="text/javascript">
            // This script is used to maintain the Div scroll position on partial postback.
            var scrollTop;

            // Register Begin Request and End Request
            Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(BeginRequestHandler);
            Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);

            // Save the Div scroll position before the partial postback
            function BeginRequestHandler(sender, args) {
                var m = document.getElementById('<%=PanelMenuLeft.ClientID%>');
                scrollTop = m.scrollTop;
            }

            // Restore the Div scroll position after the partial postback
            function EndRequestHandler(sender, args) {
                var m = document.getElementById('<%=PanelMenuLeft.ClientID%>');
                m.scrollTop = scrollTop;
            }
        </script>


  • jQuery tabs and IE8

    - by Stephen
    Hi, I am using jQuery to create tabbed content with the following code:

        $(document).ready(function() {
            $("#content").tabs({ fx: { opacity: 'toggle' } });
        });

        $(document).ready(function() {
            $("#documents").tabs({ fx: { opacity: 'toggle' } });
        });

    Here it is in Firefox, working like it does in every other browser: https://www.completelettingsolutions.co.uk/working.JPG. But in IE8 it looks like this (IE7 is fine): https://www.completelettingsolutions.co.uk/notWorking.JPG. I think it is something to do with the opacity effect, but I can't get it to work. Does anyone have any idea? Cheers
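
    For what it's worth, opacity animations are a frequent IE8 trouble spot (the filter style IE uses for opacity interacts badly with some layouts), so one hedged workaround people use is simply not to request the opacity effect in old IE and fall back to an instant switch there. A sketch using the $.browser flag that the jQuery versions of that era still shipped:

        // Sketch: only ask jQuery UI tabs for the opacity toggle outside IE < 9.
        // $.browser exists in jQuery versions before 1.9.
        $(document).ready(function () {
            var oldIE = $.browser.msie && parseInt($.browser.version, 10) < 9;
            var options = oldIE ? {} : { fx: { opacity: 'toggle' } };

            $("#content").tabs(options);
            $("#documents").tabs(options);
        });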


  • PostgreSQL - one database for everyone, or one database per customer

    - by user337876
    I'm working on a web-based business application where each customer will need to have their own data (think of the basecamphq.com type of model). For scalability and ease of upgrades, I'd prefer to have a single database where each customer gets a filtered view of the data. The problem is how to guarantee that they stay sandboxed to their own data. Trying to enforce it in code seems like a disaster waiting to happen. I know Oracle has a way to append a WHERE clause to every query based on a login id, but does PostgreSQL have anything similar? If not, is there a different design pattern I could use (like creating a view of each table for each customer that filters)? Worst-case scenario: what is the performance/memory overhead of having 1000 100MB databases vs. having a single 1TB database? I will need to provide backup/restore functionality on a per-customer basis, which is dead simple with one database per customer but quite a bit trickier if they share a database with other customers.
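
    As a sketch of the view-per-customer pattern (and only a sketch: the table and setting names are invented, and on the PostgreSQL releases current at the time a custom setting like this needs custom_variable_classes = 'app' in postgresql.conf): the application sets a per-session variable after authenticating, and every customer-facing query goes through views that filter on it.

        -- Base table owned by the application schema (hypothetical).
        CREATE TABLE orders (
            order_id    serial PRIMARY KEY,
            customer_id integer NOT NULL,
            total       numeric(10,2) NOT NULL
        );

        -- Filtered view the application (or a restricted role) queries instead of the table.
        CREATE VIEW customer_orders AS
            SELECT order_id, customer_id, total
            FROM   orders
            WHERE  customer_id = current_setting('app.customer_id')::integer;

        -- At the start of each session / request, after authenticating the customer:
        SET app.customer_id = '42';

        -- Now this only ever sees customer 42's rows.
        SELECT * FROM customer_orders;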


  • String matching algorithms used by Lucene

    - by iamrohitbanga
    I want to know the string matching algorithms used by Apache Lucene. I have been going through the index file format used by Lucene, described here. It seems that Lucene stores all the words occurring in the text as-is, along with their frequency of occurrence in each document. But as far as I know, for efficient string matching it would need to preprocess the words occurring in the documents. Example: search for "iamrohitbanga is a user of stackoverflow" (use fuzzy matching) in some documents. It is possible that there is a document containing the string "rohit banga". To find that the substrings rohit and banga are present in the search string, it would use some efficient substring matching. I want to know which algorithm it is. Also, if it does some preprocessing, which function call in the Java API triggers it?
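
    Not an answer to which exact algorithm runs internally, but a small sketch (written against the Lucene 3.x API of that era, with made-up document text) of where this is expressed at the API level: the analyzer tokenizes documents into terms at index time, and a FuzzyQuery then matches terms in the index by edit distance rather than by substring search.

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.FuzzyQuery;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.ScoreDoc;
        import org.apache.lucene.search.TopDocs;
        import org.apache.lucene.store.RAMDirectory;
        import org.apache.lucene.util.Version;

        public class FuzzyDemo {
            public static void main(String[] args) throws Exception {
                RAMDirectory dir = new RAMDirectory();
                IndexWriter writer = new IndexWriter(dir,
                        new StandardAnalyzer(Version.LUCENE_30),
                        IndexWriter.MaxFieldLength.UNLIMITED);

                Document doc = new Document();
                doc.add(new Field("content", "rohit banga answered the question",
                        Field.Store.YES, Field.Index.ANALYZED));   // tokenized at index time
                writer.addDocument(doc);
                writer.close();

                IndexSearcher searcher = new IndexSearcher(dir);
                // "bango" is a misspelling of the indexed term "banga"; FuzzyQuery
                // matches it by edit distance, 0.5f being the minimum similarity.
                FuzzyQuery query = new FuzzyQuery(new Term("content", "bango"), 0.5f);
                TopDocs hits = searcher.search(query, 10);
                for (ScoreDoc sd : hits.scoreDocs) {
                    System.out.println(searcher.doc(sd.doc).get("content"));
                }
                searcher.close();
            }
        }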


  • OAuth request token for an installed application

    - by Andres
    Hi all. I'm trying to use/understand Google's request token mechanism. I intend to use it for an application I've started developing to access Orkut data using the OpenSocial API. I read this document that explains the steps to obtain a token for an installed application. It tells you to use the OAuthGetRequestToken method from the Google OAuth API to acquire a request token (the manual for this function is available here). But the parameter oauth_consumer_key, which is required, asks for the "Domain identifying the third-party web application", and I don't have a domain; it is an installed application. So my question is: what should I put in this parameter in that case? I'm using the oauth_playground to run my tests. Thanks


  • App Engine datastore - query on enum fields

    - by Gopi
    I am using GAE (Java) with JDO for persistence. I have an entity with an enum field which is marked as @Persistent and gets saved correctly into the datastore (as observed from the Datastore viewer in the Development Console). But when I query these entities with a filter based on the enum value, it always returns all the entities, whatever value I specify for the enum field. I know GAE Java supports enums being persisted just like basic data types, but does it also allow retrieving/querying based on them? A Google search could not point me to any such example code. Details: I have printed the query just before it is executed, so in the two cases the query looks like this:

        SELECT FROM com.xxx.yyy.User WHERE role == super ORDER BY key desc RANGE 0,50
        SELECT FROM com.xxx.yyy.User WHERE role == admin ORDER BY key desc RANGE 0,50

    Both queries return all the User entities from the datastore, in spite of the Datastore viewer showing that some Users are of type 'admin' and some are of type 'super'.
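
    A sketch of the way this usually gets written (the User class and a top-level Role enum are guessed from the query text): pass the enum through a declared parameter instead of splicing its value into the filter string, since a bare identifier like super or admin in the filter is not interpreted as your enum constant.

        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;

        public class UserQueries {

            @SuppressWarnings("unchecked")
            public static List<User> findByRole(PersistenceManager pm, Role role) {
                Query q = pm.newQuery(User.class);
                q.setFilter("role == roleParam");
                q.declareParameters(Role.class.getName() + " roleParam");
                q.setOrdering("key desc");
                q.setRange(0, 50);
                // Passing the enum itself; passing role.name() is the other variant people
                // use, since the datastore stores the enum by its name.
                return (List<User>) q.execute(role);
            }
        }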


  • Xcode custom project template creates only references to files

    - by user315374
    Hi, I tried to create a custom project template for setting up unit testing. The problem is that when I create a new project based on this template, it creates references to the template files: when I edit a file, it changes my template files instead of my actual project files, and when I delete my template, files in my actual project turn red. The project template is in:

        /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application/Test-based Application

    Reading some questions on Stack Overflow, I tried to install my template project in:

        /Library/Application Support/Developer/Shared/Xcode/Project Templates

    But I cannot see my template when I create a new project. Can anybody help? Thanks, Vincent


  • Does setting an onload event for a <script> tag work consistently in modern browsers?

    - by Matchu
    I observe that placing the following in an external script file has the desired effect in my copies of Firefox and Google Chrome:

        var s = document.createElement('script');
        s.setAttribute('type', 'text/javascript');
        s.setAttribute('src', 'http://www.example.com/external_script.js');
        s.onload = function () { doSomethingNowThatExternalScriptHasLoaded(); };
        document.getElementsByTagName('body')[0].appendChild(s);

    It adds an external script tag to the DOM, and attaches a function to the tag for when the script has loaded. I'm having trouble testing in Internet Explorer right now, and I'm not sure if it's related to that addition in particular, or something else. Does this method work in the more modern versions of other browsers, including IE7/8? If not, how else could I go about this?
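
    For reference, older IE (7/8) is exactly where the plain onload property tends not to fire for dynamically inserted scripts; the usual belt-and-braces pattern is to also hook onreadystatechange and guard against the callback running twice. A sketch, reusing the same external script URL (the loadScript name is mine):

        function loadScript(src, callback) {
            var s = document.createElement('script');
            var done = false;                       // guard: some browsers fire both handlers
            s.type = 'text/javascript';
            s.src = src;

            s.onload = function () {                // Firefox, Chrome, Safari, Opera
                if (!done) { done = true; callback(); }
            };
            s.onreadystatechange = function () {    // IE7/8
                if (!done && (this.readyState === 'loaded' || this.readyState === 'complete')) {
                    done = true;
                    this.onreadystatechange = null;
                    callback();
                }
            };

            document.getElementsByTagName('head')[0].appendChild(s);
        }

        loadScript('http://www.example.com/external_script.js', function () {
            doSomethingNowThatExternalScriptHasLoaded();
        });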


  • In MySQL, what is the most effective query design for joining large tables with many to many relatio

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing, and joining the productionMinute and engineState tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day of that three-month period. The first few rows of the productionMinute table:

        +-------------------+
        | production_minute |
        +-------------------+
        | 2009-12-01 00:00  |
        | 2009-12-01 00:01  |
        | 2009-12-01 00:02  |
        | 2009-12-01 00:03  |
        +-------------------+

    The join between the tables would be:

        engineState AS es
        LEFT JOIN productionMinute AS pm
               ON es.state_start_time <= pm.production_minute
              AND pm.production_minute <= es.state_end_time

    This join, however, brings up multiple environmental issues:

    - The engineState table has 5 million rows and the productionMinute table has 130,000 rows.
    - When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, multiple productionMinute table rows join to a single engineState table row.
    - When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState table rows join to a single productionMinute row.
    - In testing our logic, using only a small table extract (one day rather than 3 months for the productionMinute table), the query takes over an hour to generate.

    In researching this item in order to improve performance so that it would be feasible to query three months of data, our thought was to create a temporary table from the engineState one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance. What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
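
    Purely as a starting-point sketch (not a benchmark, and the index name is invented): since every minute in the analysis window is expected to match something, an INNER JOIN expresses the intent, a composite index on the engineState interval columns gives MySQL something to work with on the range predicate, and restricting both sides to the window keeps the intermediate result from exploding.

        -- Hypothetical supporting index on the interval columns.
        CREATE INDEX idx_engineState_interval
            ON engineState (state_start_time, state_end_time);

        SELECT pm.production_minute,
               es.vehicle,
               es.engine,
               es.engine_state,
               es.engine_variable
        FROM   productionMinute AS pm
        INNER JOIN engineState AS es
                ON es.state_start_time <= pm.production_minute
               AND es.state_end_time   >= pm.production_minute
        WHERE  pm.production_minute >= '2009-12-01 00:00:00'
          AND  pm.production_minute <  '2010-03-01 00:00:00'
          AND  es.state_start_time  <  '2010-03-01 00:00:00'
          AND  es.state_end_time    >= '2009-12-01 00:00:00';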


  • Free static checker for C99 code

    - by detly
    I am looking for a free static checker for C99 code (including GCC extensions) with the ability to explicitly say "these preprocessor macros are always defined." I need that last part because I am compiling embedded code for a single target processor. The compiler (Microchip's C32, GCC based) sets a macro based on the selected processor, which is then used in the PIC32 header files to select a processor-specific header file to include. cppcheck therefore fails because it detects the 30 different #ifdefs used to select one of the many possible PIC32 processors, tries to analyse all possible combinations of these plus all other #defines, and fails. For example, if splint could process C99 code, I would use:

        splint -D__PIC32_FEATURE_SET__=460 -D__32MX460F512L__ \
               -D__LANGUAGE_C__ -I/path/to/my/includes source.c


  • How to pass -f specdoc option through rake task

    - by dorelal
    I am using Rails 2.3.5. rake spec works fine. This is from spec --help:

        -f, --format FORMAT[:WHERE]      Specifies what format to use for output. Specify WHERE to tell
                                         the formatter where to write the output. All built-in formats
                                         expect WHERE to be a file name, and will write to $stdout if
                                         it's not specified. The --format option may be specified
                                         several times if you want several outputs

                                         Builtin formats:
                                         silent|l                 : No output
                                         progress|p               : Text-based progress bar
                                         profile|o                : Text-based progress bar with profiling of 10 slowest examples
                                         specdoc|s                : Code example doc strings
                                         nested|n                 : Code example doc strings with nested groups indented
                                         html|h                   : A nice HTML report
                                         failing_examples|e       : Write all failing examples - input for --example
                                         failing_example_groups|g : Write all failing example groups - input for --example

    How do I pass -f specdoc through the rake task?
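
    A sketch of the two routes that usually work with the RSpec 1.x rake task that Rails 2.3 uses (the task and file names below are arbitrary examples): either pass SPEC_OPTS on the command line, or define a task that sets spec_opts itself.

        # Route 1: no code changes, just an environment variable the Spec::Rake::SpecTask reads:
        #
        #   rake spec SPEC_OPTS="--format specdoc"
        #
        # Route 2: a dedicated task, e.g. in lib/tasks/spec_doc.rake:
        require 'spec/rake/spectask'

        Spec::Rake::SpecTask.new(:spec_doc) do |t|
          t.spec_opts  = ['--format', 'specdoc']
          t.spec_files = FileList['spec/**/*_spec.rb']
        end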


  • "return false" is ignored in certain browsers for link added dynamically to the DOM with JavaScript

    - by AlexV
    I dynamically add an <a> (link) tag to the DOM with:

        var link = document.createElement('a');
        link.href = 'http://www.google.com/';
        link.onclick = function () {
            window.open(this.href);
            return false;
        };
        link.appendChild(document.createTextNode('Google'));
        //someDomNode.appendChild(link);

    I want the link to open in a new window (I know it's bad, but it's required) and I don't want to use the target attribute. My code works well in IE and Firefox, but the return false doesn't work in Safari, Chrome and Opera. By "doesn't work" I mean the link is followed after the new window is opened.
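
    One alternative worth trying (a sketch, not a guaranteed fix for those three browsers): attach the handler with addEventListener/attachEvent and cancel the default action on the event object as well as via the return value, so the navigation is suppressed regardless of how the browser treats the onclick property's return value. The handler name is mine.

        var link = document.createElement('a');
        link.href = 'http://www.google.com/';
        link.appendChild(document.createTextNode('Google'));

        function openInNewWindow(e) {
            e = e || window.event;                // old IE puts the event on window
            window.open(link.href);
            if (e.preventDefault) {
                e.preventDefault();               // standards browsers
            }
            e.returnValue = false;                // old IE
            return false;                         // belt and braces
        }

        if (link.addEventListener) {
            link.addEventListener('click', openInNewWindow, false);
        } else {
            link.attachEvent('onclick', openInNewWindow);
        }

        //someDomNode.appendChild(link);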


  • Rails choking on the content of this request because of protect_from_forgery

    - by randombits
    I'm trying to simply test my RESTful API with cURL, using the following invocation:

        curl -d "name=jimmy" -H "Content-Type: application/x-www-form-urlencoded" http://127.0.0.1:3000/people.xml -i

    Rails is dying though:

        ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):
          :8:in `synchronize'

    It looks like it's running this through a protect_from_forgery filter. I thought protect_from_forgery was excluded for non-HTML HTTP POST/PUT/DELETE type requests? This clearly targets the XML format. If I pass actual XML content, it works, but my users will be submitting POST data as URL-encoded parameters. I know all the various ways I can disable protect_from_forgery, but what's the proper way of handling this? I want to leave it on, so that when I do have HTML-based forms and handle format.html I don't forget to re-enable it. I want users to be able to make HTTP POST requests to my XML-based API without getting bombarded by this.
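
    For what it's worth, in Rails 2.3 the CSRF check keys off the request Content-Type rather than the .xml format in the URL, which is why a form-encoded POST still trips it. A sketch of one common compromise (the action names are assumed): keep protect_from_forgery in ApplicationController for the HTML side and skip the token check only for this controller's API actions.

        class PeopleController < ApplicationController
          # CSRF stays on everywhere else; only the API entry points skip the token check.
          skip_before_filter :verify_authenticity_token, :only => [:create, :update, :destroy]

          def create
            @person = Person.new(params[:person])
            respond_to do |format|
              if @person.save
                format.xml { render :xml => @person, :status => :created, :location => @person }
              else
                format.xml { render :xml => @person.errors, :status => :unprocessable_entity }
              end
            end
          end
        end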


  • Using Python to play two sine tones at once

    - by Alex
    I'm using Python to play a sine tone. The tone is based on the computer's internal time in minutes, but I'd like to simultaneously play a second tone based on the seconds, for a harmonized or dueling sound. This is what I have so far; can someone point me in the right direction?

        from struct import pack
        from math import sin, pi
        import time

        def au_file(name, freq, dur, vol):
            fout = open(name, 'wb')
            # header needs size, encoding=2, sampling_rate=8000, channel=1
            fout.write('.snd' + pack('>5L', 24, 8*dur, 2, 8000, 1))
            factor = 2 * pi * freq/8000
            # write data
            for seg in range(8 * dur):
                # sine wave calculations
                sin_seg = sin(seg * factor)
                fout.write(pack('b', vol * 127 * sin_seg))
            fout.close()

        t = time.strftime("%S", time.localtime())
        ti = time.strftime("%M", time.localtime())
        tis = float(t)
        tis = tis * 100
        tim = float(ti)
        tim = tim * 100

        if __name__ == '__main__':
            au_file(name='timeSound1.au', freq=tim, dur=1000, vol=1.0)
            import os
            os.startfile('timeSound1.au')
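
    A sketch of one way to get both tones with the same .au approach (this mixes the two waves into one file rather than truly playing two files at once; the function name is mine): generate both sine waves sample by sample, average them so the result still fits in a signed byte, and write that. freq1 and freq2 would be the minute- and second-based values (tim and tis above).

        from struct import pack
        from math import sin, pi

        def au_file_two_tones(name, freq1, freq2, dur, vol):
            fout = open(name, 'wb')
            # same .au header as before: encoding=2, sampling_rate=8000, 1 channel
            fout.write('.snd' + pack('>5L', 24, 8 * dur, 2, 8000, 1))
            factor1 = 2 * pi * freq1 / 8000
            factor2 = 2 * pi * freq2 / 8000
            for seg in range(8 * dur):
                # average the two waves so the combined amplitude stays within -1..1
                mixed = (sin(seg * factor1) + sin(seg * factor2)) / 2.0
                fout.write(pack('b', int(vol * 127 * mixed)))
            fout.close()

        if __name__ == '__main__':
            au_file_two_tones('timeSound2.au', 2700, 1500, dur=1000, vol=1.0)
            import os
            os.startfile('timeSound2.au')  # Windows only, as in the original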

