Search Results

Search found 12511 results on 501 pages for 'itunes store'.


  • boost::serialization of mutual pointers

    - by KneLL
    First, please take a look at these code: class Key; class Door; class Key { public: int id; Door *pDoor; Key() : id(0), pDoor(NULL) {} private: friend class boost::serialization::access; template <typename A> void serialize(A &ar, const unsigned int ver) { ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pDoor); } }; class Door { public: int id; Key *pKey; Door() : id(0), pKey(NULL) {} private: friend class boost::serialization::access; template <typename A> void serialize(A &ar, const unsigned int ver) { ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pKey); } }; BOOST_CLASS_TRACKING(Key, track_selectively); BOOST_CLASS_TRACKING(Door, track_selectively); int main() { Key k1, k_in; Door d1, d_in; k1.id = 1; d1.id = 2; k1.pDoor = &d1; d1.pKey = &k1; // Save data { wofstream f1("test.xml"); boost::archive::xml_woarchive ar1(f1); // !!!!! (1) const Key *pK = &k1; const Door *pD = &d1; ar1 << BOOST_SERIALIZATION_NVP(pK) << BOOST_SERIALIZATION_NVP(pD); } // Load data { wifstream i1("test.xml"); boost::archive::xml_wiarchive ar1(i1); // !!!!! (2) A *pK = &k_in; B *pD = &d_in; // (2.1) //ar1 >> BOOST_SERIALIZATION_NVP(k_in) >> BOOST_SERIALIZATION_NVP(d_in); // (2.2) ar1 >> BOOST_SERIALIZATION_NVP(pK) >> BOOST_SERIALIZATION_NVP(pD); } } The first (1) is a simple question - is it possible to pass objects to archive without pointers? If simply pass objects 'as is' that boost throws exception about duplicated pointers. But I'm confused of creating pointers to save objects. The second (2) is a real trouble. If comment out string after (2.1) then boost will corectly load a first Key object (and init internal Door pointer pDoor), but will not init a second Door (d_in) object. After this I have an inited *k_in* object with valid pointer to Door and empty *d_in* object. If use string (2.2) then boost will create two Key and Door objects somewhere in memory and save addresses in pointers. But I want to have two objects *k_in* and *d_in*. So, if I copy a values of memory objects to local variables then I store only addresses, for example, I can write code after (2.2): d_in.id = pD->id; d_in.pKey = pD->pKey; But in this case I store only a pointer and memory object remains in memory and I cannot delete it, because *d_in.pKey* will be unvalid. And I cannot perform a deep copy with operator=(), because if I write code like this: Key &operator==(const Key &k) { if (this != &k) { id = k.id; // call to Door::operator=() that calls *pKey = *d.pKey and so on *pDoor = *k.pDoor; } return *this; } then I will get a something like recursion of operator=()s of Key and Door. How to implement proper serialization of such pointers?
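    One possible way through this, sketched below: save and load the pair symmetrically through pointers and let the archive own the loaded objects, so Boost's object tracking reconstructs the cycle on its own. This is only a minimal sketch, not the poster's exact setup: it uses text archives instead of the wide-character XML archives for brevity, lets the archive allocate the loaded Key/Door on the heap rather than deserializing into local k_in/d_in, and keeps the same Key/Door shape as above. If stack-local copies are truly required, they would still have to be filled in after the load.

```cpp
#include <fstream>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/access.hpp>
#include <boost/serialization/nvp.hpp>

class Door;

class Key {
public:
    int id;
    Door *pDoor;
    Key() : id(0), pDoor(0) {}
private:
    friend class boost::serialization::access;
    template <typename A>
    void serialize(A &ar, const unsigned int) {
        ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pDoor);
    }
};

class Door {
public:
    int id;
    Key *pKey;
    Door() : id(0), pKey(0) {}
private:
    friend class boost::serialization::access;
    template <typename A>
    void serialize(A &ar, const unsigned int) {
        ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pKey);
    }
};

int main() {
    {   // Save: both objects go through pointers, so tracking stores one copy of each.
        Key k1;  Door d1;
        k1.id = 1; d1.id = 2;
        k1.pDoor = &d1; d1.pKey = &k1;
        std::ofstream f("test.txt");
        boost::archive::text_oarchive ar(f);
        const Key *pK = &k1; const Door *pD = &d1;
        ar << BOOST_SERIALIZATION_NVP(pK) << BOOST_SERIALIZATION_NVP(pD);
    }
    {   // Load: the archive allocates the pair; tracking restores pDoor/pKey to the cycle.
        std::ifstream f("test.txt");
        boost::archive::text_iarchive ar(f);
        Key *pK = 0; Door *pD = 0;
        ar >> BOOST_SERIALIZATION_NVP(pK) >> BOOST_SERIALIZATION_NVP(pD);
        // Here pK->pDoor == pD and pD->pKey == pK; delete both when done.
        delete pK; delete pD;
    }
}
```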

    Read the article

  • Android Dev Help: Saving an image from res/raw or the assets folder to the SD card

    - by Lucy
    Android Development Query Hello, I wonder if anyone could help me, i am trying to save an image (jpg or png) from the res/raw or assets folder to the SD card location (/sdcard/DCIM/). I have been following a tutorial which can save an image from a URL to the SD card Root, but i have looked everywhere to be able to save from res/raw or asset folder instead, and to a differnet location onthe sd card /sdcard/DCIM/ Here is the code, can anyone show me how to do the above from this? Thanks Lucy public class home extends Activity { private File file; private String imgNumber; private Button btnDownload; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); btnDownload=(Button)findViewById(R.id.btnDownload); btnDownload.setOnClickListener(new OnClickListener() { public void onClick(View v) { btnDownload.setText("Download is in Progress."); String savedFilePath=Download("http://www.domain.com/android1.png"); Toast.makeText(getApplicationContext(), "File is Saved in "+savedFilePath, 1000).show(); if(savedFilePath!=null) { btnDownload.setText("Download Completed."); } } }); } public String Download(String Url) { String filepath=null; try { //set the download URL, a url that points to a file on the internet //this is the file to be downloaded URL url = new URL(Url); //create the new connection HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection(); //set up some things on the connection urlConnection.setRequestMethod("GET"); urlConnection.setDoOutput(true); //and connect! urlConnection.connect(); //set the path where we want to save the file //in this case, going to save it on the root directory of the //sd card. File SDCardRoot = Environment.getExternalStorageDirectory(); //create a new file, specifying the path, and the filename //which we want to save the file as. String filename= "download_"+System.currentTimeMillis()+".png"; // you can download to any type of file ex:.jpeg (image) ,.txt(text file),.mp3 (audio file) Log.i("Local filename:",""+filename); file = new File(SDCardRoot,filename); if(file.createNewFile()) { file.createNewFile(); } //this will be used to write the downloaded data into the file we created FileOutputStream fileOutput = new FileOutputStream(file); //this will be used in reading the data from the internet InputStream inputStream = urlConnection.getInputStream(); //this is the total size of the file int totalSize = urlConnection.getContentLength(); //variable to store total downloaded bytes int downloadedSize = 0; //create a buffer... byte[] buffer = new byte[1024]; int bufferLength = 0; //used to store a temporary size of the buffer //now, read through the input buffer and write the contents to the file while ( (bufferLength = inputStream.read(buffer)) > 0 ) { //add the data in the buffer to the file in the file output stream (the file on the sd card fileOutput.write(buffer, 0, bufferLength); //add up the size so we know how much is downloaded downloadedSize += bufferLength; //this is where you would do something to report the prgress, like this maybe Log.i("Progress:","downloadedSize:"+downloadedSize+"totalSize:"+ totalSize) ; btnDownload.setText("download Status:"+downloadedSize+" / "+totalSize); } //close the output stream when done fileOutput.close(); if(downloadedSize==totalSize) filepath=file.getPath(); //catch some possible errors... 
} catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { filepath=null; btnDownload.setText("Internet Connection Failed.\n"+e.getMessage()); e.printStackTrace(); } Log.i("filepath:"," "+filepath) ; return filepath; } }
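    For the original question (copying from res/raw or assets instead of a URL), the download loop above can be replaced by a local InputStream. The sketch below is hypothetical: it assumes it lives inside the same Activity, that an asset named android1.png (or a raw resource R.raw.android1) actually exists, and that the manifest declares the WRITE_EXTERNAL_STORAGE permission. It also needs java.io.File, java.io.FileOutputStream, java.io.InputStream, java.io.IOException and android.os.Environment imported.

```java
// Hypothetical method on the Activity above: copies "android1.png" from the
// assets folder (or a res/raw resource) to /sdcard/DCIM/. Names are examples.
public String copyToSdCard() {
    String filepath = null;
    try {
        // Pick ONE of these as the source:
        InputStream inputStream = getAssets().open("android1.png");
        // InputStream inputStream = getResources().openRawResource(R.raw.android1);

        File dcimDir = new File(Environment.getExternalStorageDirectory(), "DCIM");
        if (!dcimDir.exists()) {
            dcimDir.mkdirs();
        }
        File file = new File(dcimDir, "android1.png");

        // Same buffered copy as the URL version, just from a local stream.
        FileOutputStream fileOutput = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int bufferLength;
        while ((bufferLength = inputStream.read(buffer)) > 0) {
            fileOutput.write(buffer, 0, bufferLength);
        }
        fileOutput.close();
        inputStream.close();
        filepath = file.getPath();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return filepath;
}
```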

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) public int[] GetValidItemIds(int userId) Initially I'm thinking storage on the form: itemId -> userId, userId, userId and userId -> itemId, itemId, itemId AddItemSecurity is based on how I get data from a third party API, GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item id's are on the form: 2007123456, 2010001234 (10 digits where first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item id's had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and set a true/false bit if the item was present or not. That would limit the array length to little over 1mb per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .Net 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and store an array per year could be a solution. The ItemId - UserId[] list can be serialized directly to disk and read/write with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added all the lists have to updated as well, but this can be done nightly. Question Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL server will not perform fast enough, and it would give an overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thought or insights on the matter is appreciated. And I want to try to solve it without adding too much hardware :) [Update 2010-03-31] I have now tested with SQL server 2008 under the following conditions. Table with two columns (userid,itemid) both are Int Clustered index on the two columns Added ~800.000 items for 180 users - Total of 144 million rows Allocated 4gb ram for SQL server Dual Core 2.66ghz laptop SSD disk Use a SqlDataReader to read all itemid's into a List Loop over all users If I run one thread it averages on 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still ok. From there on the results are decreasing. Adding a third thread brings alot of the queries up to 2 seonds. A forth thread, up to 4 seconds, a fifth spikes some of the queries up to 50 seconds. The CPU is roofing while this is going on, even on one thread. My test app takes some due to the speedy loop, and sql the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say storing an array of int's per user instead of one record per item. But this makes it harder to remove items.
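    A rough sketch of the "one bit per item, one mapped file per user" idea on top of .NET 4 memory-mapped files is below. Class, path and size choices are illustrative only, and itemIndex is assumed to be the item id with the year prefix stripped, as suggested above.

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

// One instance per user: a bit array persisted in a memory-mapped file,
// so the OS file cache does the caching.
public class UserItemBitmap : IDisposable
{
    private readonly MemoryMappedFile _mmf;
    private readonly MemoryMappedViewAccessor _view;

    public UserItemBitmap(string path, long maxItemIndex)
    {
        long bytes = (maxItemIndex / 8) + 1;            // ~1.25 MB for 10M items
        _mmf = MemoryMappedFile.CreateFromFile(
            path, FileMode.OpenOrCreate, null, bytes);
        _view = _mmf.CreateViewAccessor();
    }

    public void Set(long itemIndex, bool allowed)
    {
        long byteOffset = itemIndex / 8;
        byte mask = (byte)(1 << (int)(itemIndex % 8));
        byte current = _view.ReadByte(byteOffset);
        _view.Write(byteOffset, allowed ? (byte)(current | mask)
                                        : (byte)(current & ~mask));
    }

    public bool Get(long itemIndex)
    {
        byte current = _view.ReadByte(itemIndex / 8);
        return (current & (1 << (int)(itemIndex % 8))) != 0;
    }

    public void Dispose()
    {
        _view.Dispose();
        _mmf.Dispose();
    }
}
```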

    Read the article

  • Clickonce installation fails after addition of WCF service project

    - by Ant
    So I have a winform solution, deployed via clickonce. Eveything worked fine until i added a WCF project. (see error in parsing the manifest file at end of post) Now I notice that MSBuild compiles the service into a _PublishedWebsites dir. I don't know what the need for this is, but I am suspecting this is the cause of the problem. This wcf project references some other projects within the solution. I am actually hosting the wcf service within the application so I don't really need MSBuild to do all this for me. Any ideas? ===================================================================================== PLATFORM VERSION INFO Windows : 5.1.2600.131072 (Win32NT) Common Language Runtime : 2.0.50727.3603 System.Deployment.dll : 2.0.50727.3053 (netfxsp.050727-3000) mscorwks.dll : 2.0.50727.3603 (GDR.050727-3600) dfdll.dll : 2.0.50727.3053 (netfxsp.050727-3000) dfshim.dll : 2.0.50727.3053 (netfxsp.050727-3000) SOURCES Deployment url : file:///C:/applications/abc/dev/abc.Application.application IDENTITIES Deployment Identity : Flow Management System.app, Version=1.4.0.0, Culture=neutral, PublicKeyToken=8453086392175e0f, processorArchitecture=msil APPLICATION SUMMARY * Installable application. * Trust url parameter is set. ERROR SUMMARY Below is a summary of the errors, details of these errors are listed later in the log. * Activation of C:\applications\abc\dev\abc.Application.application resulted in exception. Following failure messages were detected: + Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened. + Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed: -HRESULT: 0x80070c81 Start line: 0 Start column: 0 Host file: + Exception from HRESULT: 0x80070C81 COMPONENT STORE TRANSACTION FAILURE SUMMARY No transaction error was detected. WARNINGS There were no warnings during this operation. OPERATION PROGRESS STATUS * [12/03/2010 6:33:53 PM] : Activation of C:\applications\abc\dev\abc.Application.application has started. * [12/03/2010 6:33:53 PM] : Processing of deployment manifest has successfully completed. * [12/03/2010 6:33:53 PM] : Installation of the application has started. ERROR DETAILS Following errors were detected during this operation. * [12/03/2010 6:33:53 PM] System.Deployment.Application.InvalidDeploymentException (ManifestParse) - Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened. 
- Source: System.Deployment - Stack trace: at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri) at System.Deployment.Application.DownloadManager.DownloadManifest(Uri& sourceUri, String targetPath, IDownloadNotification notification, DownloadOptions options, ManifestType manifestType, ServerInformation& serverInformation) at System.Deployment.Application.DownloadManager.DownloadApplicationManifest(AssemblyManifest deploymentManifest, String targetDir, Uri deploymentUri, IDownloadNotification notification, DownloadOptions options, Uri& appSourceUri, String& appManifestPath) at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp) at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc) at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl) at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state) --- Inner Exception --- System.Deployment.Application.InvalidDeploymentException (ManifestParse) - Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed: -HRESULT: 0x80070c81 Start line: 0 Start column: 0 Host file: - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream) at System.Deployment.Application.Manifest.AssemblyManifest..ctor(FileStream fileStream) at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri) --- Inner Exception --- System.Runtime.InteropServices.COMException - Exception from HRESULT: 0x80070C81 - Source: System.Deployment - Stack trace: at System.Deployment.Internal.Isolation.IsolationInterop.CreateCMSFromXml(Byte[] buffer, UInt32 bufferSize, IManifestParseErrorCallback Callback, Guid& riid) at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream) COMPONENT STORE TRANSACTION DETAILS No transaction information is available.

    Read the article

  • Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not such a simple task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is how it looks like (I've broken it down into separate lines to make it more understandable what it does): 1 (?<outer>\()? 2 (?<scheme>http(?<secure>s)?://)? 3 (?<url> 4 (?(scheme) 5 (?:www\.)? 6 | 7 www\. 8 ) 9 [a-z0-9] 10 (?(outer) 11 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\)) 12 | 13 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+ 14 ) 15 ) 16 (?<ending>(?(outer)\))) As you may see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (cšžcd), that allow our localised URL to be parsed as well. You can easily omit them if you'd like. Anyway. Here's what it does (referring to line numbers): 1 - detects if URL starts with open braces (is contained inside braces) and stores it in "outer" named capture group 2 - checks if it starts with URL scheme also detecting whether scheme is SSL or not 3 - starts parsing URL itself (will store it in "url" named capture group) 4-8 - if statement that says: if "sheme" was present then www. part is optional, otherwise mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www) 9 - first character after http:// or www. should be either a letter or a number (this can be extended if you would like to cover even more links, but I've decided to omit other characters because I can't remember a link that would start with some other character 10-14 - if statement that says: if "outer" (braces) was present capture everything up to the last closing braces otherwise capture all 15 - closes the named capture group for URL 16 - if open braces was present, capture closing braces as well and store it in "ending" named capture group First and last line used to have \s* in them as well, so user could also write open braces and put a space inside before pasting link. Anyway. My code that does link replacement with actual anchor HTML elements looks exactly like this: value = Regex.Replace( value, @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+))(?<ending>(?(outer)\)))", "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}", RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase); As you can see I'm using named capture groups to replace link with an Anchor tag: ${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending} I could as well omit the http(s) part in anchor display to make links look friendlier, but for now I decided not to. Question I would like for my links to be replaced with shortenings as well. So when user copies a very long links (for instance if they would copy a link from google maps that usually generates long links I would like to shorten the visible part of the anchor tag. Link would work, but visible part of an anchor tag would be shortened to some number of characters. Does the replace string support notations like that so I can stil use a singe Regex.Replace() call?
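    On the closing question: the substitution-string syntax (${url} and friends) cannot shorten the visible text, but the Regex.Replace overload that takes a MatchEvaluator can, while still being a single call. A hedged sketch reusing the same named groups is below; the 50-character cutoff is an arbitrary example, not something from the original code.

```csharp
// Same pattern as above, but a MatchEvaluator builds the replacement so the
// visible anchor text can be truncated while the href keeps the full URL.
value = Regex.Replace(
    value,
    @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+))(?<ending>(?(outer)\)))",
    m =>
    {
        string href = "http" + m.Groups["secure"].Value + "://" + m.Groups["url"].Value;
        string display = href.Length <= 50 ? href : href.Substring(0, 47) + "...";
        return m.Groups["outer"].Value
             + "<a href=\"" + href + "\">" + display + "</a>"
             + m.Groups["ending"].Value;
    },
    RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);
```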

    Read the article

  • java - BigDecimal

    - by Mk12
    I was trying to make my own class for currencies using longs, but Apparently I should use BigDecimal (and then whenever I print it just add the $ sign before it). Could someone please get me started? What would be the best way to use BigDecimals for Dollar currencies, like making it at least but no more than 2 decimal places for the cents, etc. The api for BigDecimal is huge, and I don't know which methods to use. Also, BigDecimal has better precision, but isn't that all lost if it passes through a double? if I do new BigDecimal(24.99), how will it be different than using a double? Or should I use the constructor that uses a String instead? EDIT: I decided to use BigDecimals, and then use: private static final java.text.NumberFormat moneyt = java.text.NumberFormat.getCurrencyInstance(); { money.setRoundingMode(RoundingMode.HALF_EVEN); } and then whenever I display the BigDecimals, to use money.format(theBigDecimal). Is this alright? Should I have the BigDecimal rounding it too? Because then it doesn't get rounded before it does another operation.. if so, could you show me how? And how should I create the BigDecimals? new BigDecimal("24.99") ? Well, after many comments to Vineet Reynolds (thanks for keeping coming back and answering), this is what I have decided. I use BigDecimals and a NumberFormat. Here is where I create the NumberFormat instance. private static final NumberFormat money; static { money = NumberFormat.getCurrencyInstance(Locale.CANADA); money.setRoundingMode(RoundingMode.HALF_EVEN); } Here is my BigDecimal: private final BigDecimal price; Whenever I want to display the price, or another BigDecimal that I got through calculations of price, I use: money.format(price) to get the String. Whenever I want to store the price, or a calculation from price, in a database or in a field or anywhere, I use (for a field): myCalculatedResult = price.add(new BigDecimal("34.58")).setScale(2, RoundingMode.HALF_EVEN); .. but I'm thinking now maybe I should not have the NumberFormat round, but when I want to display do this: System.out.print(money.format(price.setScale(2, RoundingMode.HALF_EVEN); That way to ensure the model and things displayed in the view are the same. I don't do: price = price.setScale(2, RoundingMode.HALF_EVEN); Because then it would always round to 2 decimal places and wouldn't be as precise in calculations. So its all solved now, I guess. But is there any shortcut to typing calculatedResult.setScale(2, RoundingMode.HALF_EVEN) all the time? All I can think of is static importing HALF_EVEN... EDIT: I've changed my mind a bit, I think if I store a value, I won't round it unless I have no more operations to do with it, e.g. if it was the final total that will be charged to someone. I will only round things at the end, or whenever necessary, and I will still use NumberFormat for the currency formatting, but since I always want rounding for display, I made a static method for display: public static String moneyFormat(BigDecimal price) { return money.format(price.setScale(2, RoundingMode.HALF_EVEN)); } So values stored in variables won't be rounded, and I'll use that method to display prices.
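    A compact sketch pulling those decisions together is below: construct BigDecimal from a String, keep full precision while calculating, and round with setScale only when a value leaves the model (display, invoice, database). The Money class and the sample figures are made up for illustration.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.text.NumberFormat;
import java.util.Locale;

public class Money {
    private static final NumberFormat MONEY =
            NumberFormat.getCurrencyInstance(Locale.CANADA);

    // Keep full precision while calculating...
    public static BigDecimal subtotal(BigDecimal unitPrice, int quantity) {
        return unitPrice.multiply(BigDecimal.valueOf(quantity));
    }

    // ...and round only at the edges, when a value is shown or persisted.
    public static String format(BigDecimal amount) {
        return MONEY.format(amount.setScale(2, RoundingMode.HALF_EVEN));
    }

    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("24.99");    // String ctor: exactly 24.99
        BigDecimal fromDouble = new BigDecimal(24.99); // double ctor: 24.989999...
        System.out.println(format(subtotal(price, 3)));  // $74.97
        System.out.println(fromDouble);                  // shows why the String ctor wins
    }
}
```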

    Read the article

  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it should belong. I have an idea myself, but I don't know if it's the only method and more importantly, the right method. My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe it's more, I don't even know because they don't tell me everything), and the current system I've written over the past months is maybe 2 stages through. This meets our deadline of completing those systems by Monday next week... but at that point my client is planning on making the site live from that point. In effect, their work with their clients will be running parallel with my work for them. As I complete my own work on a separate test server, I'll push each following stage of the process onto the live server. Scheduled downtimes during non-business times (like a weekend) will be available for me to perform these pushes. Keeping pace so that my development is faster than the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post. In this system, we have sets of features which will create lists for their associated content types and field types when activated, and delete these lists when the feature is deactivated. Most updates don't need to deactivate and reactivate these features, such as workflow changes, custom actions, custom forms, and similar ilk. But there are some parts which do require this. On my test server, it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, it's absolutely unacceptable to do this. So when I need to implement a new change in functionality, I need to be able to store the currently present data in several lists, deactivate the feature, reactivate the feature, and restore all of this data. Perhaps I have hoist myself by my own petard with the feature system I implemented. Unfortunately, the necessity to later on make several of these "project sites" meant I had to do a lot of my code with the concept of "Can be deployed repeatedly" in mind. My current plan is to run through lists and libraries which will be affected by the particular feature that is to be reset. Files and all of their versions will be saved in a directory on the server. Then, a set of text files will be used to store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then, I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each one is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that I make sure I don't re-update it later with an incorrect value, of course). After that, we run through all the rest of the items in the order most conducive to keeping the relational data correct. This roughly summarizes what my current plan is. 
To my advantage, there are no long running workflows in the system that will be affected by this, so there's nothing I will have to worry about making sure nothing is "still running" when I do this stuff. I don't really know all the cons of this approach... I can imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Is there anyone who can think of a better idea? Or will anyone just tell me that I really have no other choice? Thanks in advance!

    Read the article

  • Rendering a view to a string in MVC, then redirecting -- workarounds?

    - by James S
    Hi -- I can't render a view to a string and then redirect, despite this answer from Feb (after version 1.0, I think) that claims it's possible. I thought I was doing something wrong, and then I read this answer from Haack in July that claims it's not possible. If somebody has it working and can help me get it working, that's great (and I'll post code, errors). However, I'm now at the point of needing workarounds. There are a few, but nothing ideal. Has anybody solved this, or have any comments on my ideas? This is to render email. While I can surely send the email outside of the web request (store info in a db and get it later), there are many types of emails and I don't want to store the template data (user object, a few other LINQ objects) in a db to let it get rendered later. I could create a simpler, serializable POCO and save that in the db, but why? ... I just want rendered text! I can create a new RedirectToAction object that checks if the headers have been sent (can't figure out how to do this -- try/catch?) and, if so, builds out a simple page with a meta redirect, a javascript redirect, and also a "click here" link. Within my controller, I can remember if I've rendered an email and, if so, manually do #2 by displaying a view. I can manually send the redirect headers before any potential email rendering. Then, rather than using the MVC infrastructure to redirecttoaction, I just call result.end. This seems easiest, but really messy. Anything else? EDIT: I've tried Dan's code (very similar to the code from Jan/Feb that I've already tried) and I'm still getting the same error. The only substantial difference I can see is that his example uses a view while I use a partial view. I'll try testing this later with a view. Here's what I've got: Controller public ActionResult Certifications(string email_intro) { //a lot of stuff ViewData["users"] = users; if (isPost()) { //create the viewmodel var view_model = new ViewModels.Emails.Certifications.Open(userContext) { emailIntro = email_intro }; //i've tried stopping this after just one iteration, in case the problem is due to calling it multiple times foreach (var user in users) { if (user.Email_Address.IsValidEmailAddress()) { //add more stuff to the view model specific to this user view_model.user = user; view_model.certification302Summary.subProcessesOwner = new SubProcess_Certifications(RecordUpdating.Role.Owner, null, null, user.User_ID, repository); //more here.... //if i comment out the next line, everything works ok SendEmail(view_model, this.ControllerContext); } } return RedirectToAction("Certifications"); } return View(); } SendEmail() public static void SendEmail(ViewModels.Emails.Certifications.Open model, ControllerContext context) { var vd = context.Controller.ViewData; vd["model"] = model; var renderer = new CustomRenderers(); //i fixed an error in your code here var text = renderer.RenderViewToString3(context, "~/Views/Emails/Certifications/Open.ascx", "", vd, null); var a = text; } CustomRenderers public class CustomRenderers { public virtual string RenderViewToString3(ControllerContext controllerContext, string viewPath, string masterPath, ViewDataDictionary viewData, TempDataDictionary tempData) { //copy/paste of dan's code } } Error [HttpException (0x80004005): Cannot redirect after HTTP headers have been sent.] System.Web.HttpResponse.Redirect(String url, Boolean endResponse) +8707691 Thanks, James
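    For reference, a commonly circulated sketch of rendering a partial view to a string with the MVC 2-era view-engine API is below (the framework version is an assumption, not stated in the question). The point is that everything is written to a private StringWriter, never to the live Response; if anything writes to or flushes the real Response before RedirectToAction runs, the "Cannot redirect after HTTP headers have been sent" exception above is exactly what appears.

```csharp
using System;
using System.IO;
using System.Web.Mvc;

public class CustomRenderers
{
    // Renders a partial view to a string without touching the real Response,
    // so a later RedirectToAction can still set headers.
    public virtual string RenderViewToString3(
        ControllerContext controllerContext,
        string viewPath,
        string masterPath,
        ViewDataDictionary viewData,
        TempDataDictionary tempData)
    {
        ViewEngineResult result =
            ViewEngines.Engines.FindPartialView(controllerContext, viewPath);
        if (result.View == null)
            throw new InvalidOperationException("Partial view not found: " + viewPath);

        using (var writer = new StringWriter())
        {
            var viewContext = new ViewContext(
                controllerContext, result.View, viewData, tempData, writer);
            result.View.Render(viewContext, writer);
            result.ViewEngine.ReleaseView(controllerContext, result.View);
            return writer.ToString();
        }
    }
}
```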

    Read the article

  • Extjs4: editable rowbody

    - by peter
    in my first ExtJs4 project i use a editable grid with the feature rowbody to have a big textfield displayed under each row. I want it to be editable on a dblclick. I succeeded in doing so by replacing the innerHTML of the rowbody by a textarea but the special keys don't do what they are supposed to do (move the cursor). If a use the textarea in a normal field i don't have this probem. Same problem in IE7 and FF4 gridInfo = Ext.create('Ext.ux.LiveSearchGridPanel', { id: 'gridInfo', height: '100%', width: '100%', store: storeInfo, columnLines: true, selType: 'cellmodel', columns: [ {text: "Titel", flex: 1, dataIndex: 'titel', field:{xtype:'textfield'}}, {text: "Tags", id: "tags", flex: 1, dataIndex: 'tags', field:{xtype:'textfield'}}, {text: "Hits", dataIndex: 'hits'}, {text: "Last Updated", renderer: Ext.util.Format.dateRenderer('d/m/Y'), dataIndex: 'lastChange'} ], plugins: [ Ext.create('Ext.grid.plugin.CellEditing', { clicksToEdit: 1 }) ], features: [ { ftype: 'rowbody', getAdditionalData: function(data, idx, record, orig) { var headerCt = this.view.headerCt, colspan = headerCt.getColumnCount(); return { rowBody: data.desc, //the big textfieldvalue, can't use a textarea here 8< rowBodyCls: this.rowBodyCls, rowBodyColspan: colspan }; } }, {ftype: 'rowwrap'} ] }); me.on('rowbodydblclick', function(gridView, el, event, o) { ... rb = td.down('.x-grid-rowbody').dom; var value = rb.innerText?rb.innerText:rb.textContent; rb.innerHTML = ''; Ext.create('Ext.form.field.TextArea', { id: 'textarea1', value : value, renderTo: rb, border: false, enterIsSpecial: true, enableKeyEvents: true, disableKeyFilter: true, listeners: { 'blur': function(el, o) { rb.innerHTML = el.value; }, 'specialkey': function(field, e){ console.log(e.keyCode); //captured but nothing happens } } }).show(); damn, can't publish my own solution, looks like somebody else has to answer, anyway, here is the function that works function editDesc(me, gridView, el, event, o) { var width = Ext.fly(el).up('table').getWidth(); var rb = event.target; var value = rb.innerText?rb.innerText:rb.textContent; rb.innerHTML = '' var txt = Ext.create('Ext.form.field.TextArea', { value : value, renderTo: rb, border: false, width: width, height: 300, enterIsSpecial: true, disableKeyFilter: true, listeners: { 'blur': function(el, o) { var value = el.value.replace('\n','<br>') rb.innerHTML = value; }, 'specialkey': function(field, e){ e.stopPropagation(); } } }); var txtTextarea = Ext.fly(rb).down('textarea'); txtTextarea.dom.style.color = 'blue'; txtTextarea.dom.style.fontSize = '11px'; } Hi Molecule Man, as an alternative to the approach above i tried the Ext.Editor. It works but i want it inline but when i render it to the rowbody, the field blanks and i have no editor, any ideas ? gridInfo = Ext.create('Ext.grid.Panel', { id: 'gridInfo', height: '100%', width: '100%', store: storeInfo, columnLines: true, selType: 'cellmodel', viewConfig: {stripeRows: false, trackOver: true}, columns: [ {text: "Titel", flex: 1, dataIndex: 'titel', field:{xtype:'textfield'}}, ... 
{text: "Last Updated", renderer: Ext.util.Format.dateRenderer('d/m/Y'), dataIndex: 'lastChange'} ], plugins: [ Ext.create('Ext.grid.plugin.CellEditing', { clicksToEdit: 1 }) ], features: [ { ftype: 'rowbody', getAdditionalData: function(data, idx, record, orig) { var headerCt = this.view.headerCt, colspan = headerCt.getColumnCount(); return { rowBody: data.desc, rowBodyCls: this.rowBodyCls, rowBodyColspan: colspan }; } } ], listeners:{ rowbodyclick: function(gridView, el, event) { //werkt editDesc(this, gridView, el, event); } } }); function editDesc(me, gridView, el, event, o) { var rb = event.target; me.txt = new Ext.Editor({ field: {xtype: 'textarea'}, updateEl: true, cancelOnEsc: true, floating: true, renderTo: rb //when i do this, the field becomes empty and i don't get the editor }); me.txt.startEdit(el); }

    Read the article

  • Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/xmlbeans/XmlException

    - by Dheeraj kumar
    I have to read xls file in java.I used poi-3.6 to read xls file in Eclipse.But i m getting this ERROR"Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/xmlbeans/XmlException at ReadExcel2.main(ReadExcel2.java:38)". I have added following jars 1)poi-3.6-20091214.jar 2)poi-contrib-3.6-20091214.jar 3)poi-examples-3.6-20091214.jar 4)poi-ooxml-3.6-20091214.jar 5)poi-ooxml-schemas-3.6-20091214.jar 6)poi-scratchpad-3.6-20091214.jar Below is the code which i m using: import org.apache.poi.ss.usermodel.Workbook; import org.apache.poi.ss.usermodel.Sheet; import org.apache.poi.ss.usermodel.Row; import org.apache.poi.ss.usermodel.Cell; import org.apache.poi.xssf.usermodel.XSSFWorkbook; import org.apache.poi.xssf.usermodel.XSSFCell; import org.apache.poi.xssf.usermodel.XSSFRow; import java.io.FileInputStream; import java.io.IOException; import java.util.Iterator; import java.util.List; import java.util.ArrayList; public class ReadExcel { public static void main(String[] args) throws Exception { // // An excel file name. You can create a file name with a full path // information. // String filename = "C:\\myExcel.xl"; // // Create an ArrayList to store the data read from excel sheet. // List sheetData = new ArrayList(); FileInputStream fis = null; try { // // Create a FileInputStream that will be use to read the excel file. // fis = new FileInputStream(filename); // // Create an excel workbook from the file system. // // HSSFWorkbook workbook = new HSSFWorkbook(fis); Workbook workbook = new XSSFWorkbook(fis); // // Get the first sheet on the workbook. // Sheet sheet = workbook.getSheetAt(0); // // When we have a sheet object in hand we can iterator on each // sheet's rows and on each row's cells. We store the data read // on an ArrayList so that we can printed the content of the excel // to the console. // Iterator rows = sheet.rowIterator(); while (rows.hasNext()) { Row row = (XSSFRow) rows.next(); Iterator cells = row.cellIterator(); List data = new ArrayList(); while (cells.hasNext()) { Cell cell = (XSSFCell) cells.next(); data.add(cell); } sheetData.add(data); } } catch (IOException e) { e.printStackTrace(); } finally { if (fis != null) { fis.close(); } } showExelData(sheetData); } private static void showExelData(List sheetData) { // // Iterates the data and print it out to the console. // for (int i = 0; i < sheetData.size(); i++) { List list = (List) sheetData.get(i); for (int j = 0; j < list.size(); j++) { Cell cell = (XSSFCell) list.get(j); System.out.print(cell.getRichStringCellValue().getString()); if (j < list.size() - 1) { System.out.print(", "); } } System.out.println(""); } } } Please help. thanks in anticipation, Regards, Dheeraj!
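    The NoClassDefFoundError itself usually means the xmlbeans jar (shipped in the POI distribution's ooxml-lib folder) is missing from the build path; it is only needed by the XSSF/OOXML classes. If the file really is a binary .xls, the HSSF classes avoid that dependency entirely. The sketch below assumes a file named C:\myExcel.xls and is not the poster's exact code, just the same loop written against HSSF.

```java
import java.io.FileInputStream;
import java.util.Iterator;

import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;

public class ReadXls {
    public static void main(String[] args) throws Exception {
        FileInputStream fis = new FileInputStream("C:\\myExcel.xls"); // assumed .xls
        try {
            Workbook workbook = new HSSFWorkbook(fis);  // binary format, no xmlbeans needed
            Sheet sheet = workbook.getSheetAt(0);
            for (Iterator<Row> rows = sheet.rowIterator(); rows.hasNext();) {
                Row row = rows.next();
                for (Iterator<Cell> cells = row.cellIterator(); cells.hasNext();) {
                    Cell cell = cells.next();
                    System.out.print(cell.toString() + ", ");
                }
                System.out.println();
            }
        } finally {
            fis.close();
        }
    }
}
```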

    Read the article

  • How can I implement the Gale-Shapley stable marriage algorithm in Perl?

    - by srk
    Problem : We have equal number of men and women.each men has a preference score toward each woman. So do the woman for each man. each of the men and women have certain interests. Based on the interest we calculate the preference scores. So initially we have an input in a file having x columns. First column is the person(men/woman) id. id are nothing but 0.. n numbers.(first half are men and next half woman) the remaining x-1 columns will have the interests. these are integers too. now using this n by x-1 matrix... we have come up with a n by n/2 matrix. the new matrix has all men and woman as their rows and scores for opposite sex in columns. We have to sort the scores in descending order, also we need to know the id of person related to the scores after sorting. So here i wanted to use hash table. once we get the scores we need to make up pairs.. for which we need to follow some rules. My trouble is with the second matrix of n by n/2 that needs to give information of which man/woman has how much preference on a woman/man. I need these scores sorted so that i know who is the first preferred woman/man, 2nd preferred and so on for a man/woman. I hope to get good suggestions on the data structures i use.. I prefer php or perl. Thank you in advance Hey guys this is not an home work. This a little modified version of stable marriage algorithm. I have working solution. I am only working on optimizing my code. more info: It is very similar to stable marriage problem but here we need to calculate the scores based on the interests they share. So i have implemented it as the way you see in the wiki page http://en.wikipedia.org/wiki/Stable_marriage_problem. my problem is not solving the problem. i solved it and can run it. I am just trying to have a better solution. so i am asking suggestions on the type of data structure to use. Conceptually I tried using an array of hashes. where the array index give the person id and the hash in it gives the id's <= score's in sorted manner. I initially start with an array of hashes. now i sort the hashes on values, but i could not store the sorted hashes back in an array.So just stored the keys after sorting and used these to get the values from my initial unsorted hashes. Can we store the hashes after sorting ? Can you suggest a better structure ?
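    One way to keep both orderings around, sketched in Perl below with made-up ids and scores: for every person, store an array of candidate ids sorted by descending score (the proposal order) plus a hash from candidate id to rank, so "does P prefer A over B" becomes a constant-time lookup instead of a search through a sorted structure.

```perl
use strict;
use warnings;

# $score{$person}{$candidate} = preference score (built from shared interests).
# Ids and scores here are invented for illustration.
my %score = (
    0 => { 2 => 7, 3 => 4 },   # man 0 scores woman 2 and woman 3
    1 => { 2 => 1, 3 => 9 },
    2 => { 0 => 5, 1 => 6 },   # women score the men
    3 => { 0 => 8, 1 => 2 },
);

my (%pref_list, %rank);
for my $person (keys %score) {
    # Candidates sorted by descending score: who to propose to first.
    my @sorted = sort { $score{$person}{$b} <=> $score{$person}{$a} }
                 keys %{ $score{$person} };
    $pref_list{$person} = \@sorted;

    # Rank lookup: lower number means more preferred.
    my $i = 0;
    $rank{$person}{$_} = $i++ for @sorted;
}

# Example: does woman 2 prefer man 1 over her current partner man 0?
my $prefers_new = $rank{2}{1} < $rank{2}{0} ? 1 : 0;
print "woman 2 prefers man 1: $prefers_new\n";
```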

    Read the article

  • Issue with translating a delegate function from c# to vb.net for use with Google OAuth 2

    - by Jeremy
    I've been trying to translate a Google OAuth 2 example from C# to Vb.net for a co-worker's project. I'm having on end of issues translating the following methods: private OAuth2Authenticator<WebServerClient> CreateAuthenticator() { // Register the authenticator. var provider = new WebServerClient(GoogleAuthenticationServer.Description); provider.ClientIdentifier = ClientCredentials.ClientID; provider.ClientSecret = ClientCredentials.ClientSecret; var authenticator = new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true }; return authenticator; } private IAuthorizationState GetAuthorization(WebServerClient client) { // If this user is already authenticated, then just return the auth state. IAuthorizationState state = AuthState; if (state != null) { return state; } // Check if an authorization request already is in progress. state = client.ProcessUserAuthorization(new HttpRequestInfo(HttpContext.Current.Request)); if (state != null && (!string.IsNullOrEmpty(state.AccessToken) || !string.IsNullOrEmpty(state.RefreshToken))) { // Store and return the credentials. HttpContext.Current.Session["AUTH_STATE"] = _state = state; return state; } // Otherwise do a new authorization request. string scope = TasksService.Scopes.TasksReadonly.GetStringValue(); OutgoingWebResponse response = client.PrepareRequestUserAuthorization(new[] { scope }); response.Send(); // Will throw a ThreadAbortException to prevent sending another response. return null; } The main issue being this line: var authenticator = new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true }; The Method signature reads as for this particular line reads as follows: Public Sub New(tokenProvider As TClient, authProvider As System.Func(Of TClient, DotNetOpenAuth.OAuth2.IAuthorizationState)) My understanding of Delegate functions in VB.net isn't the greatest. However I have read over all of the MSDN documentation and other relevant resources on the web, but I'm still stuck as to how to translate this particular line. So far all of my attempts have resulted in either the a cast error (see below) or no call to GetAuthorization. The Code (vb.net on .net 3.5) Private Function CreateAuthenticator() As OAuth2Authenticator(Of WebServerClient) ' Register the authenticator. Dim client As New WebServerClient(GoogleAuthenticationServer.Description, oauth.ClientID, oauth.ClientSecret) Dim authDelegate As Func(Of WebServerClient, IAuthorizationState) = AddressOf GetAuthorization Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, authDelegate) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, GetAuthorization(client)) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(Function(c) GetAuthorization(c))) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(AddressOf GetAuthorization)) With {.NoCaching = True} Return authenticator End Function Private Function GetAuthorization(arg As WebServerClient) As IAuthorizationState ' If this user is already authenticated, then just return the auth state. Dim state As IAuthorizationState = AuthState If (Not state Is Nothing) Then Return state End If ' Check if an authorization request already is in progress. 
state = arg.ProcessUserAuthorization(New HttpRequestInfo(HttpContext.Current.Request)) If (state IsNot Nothing) Then If ((String.IsNullOrEmpty(state.AccessToken) = False Or String.IsNullOrEmpty(state.RefreshToken) = False)) Then ' Store Credentials HttpContext.Current.Session("AUTH_STATE") = state _state = state Return state End If End If ' Otherwise do a new authorization request. Dim scope As String = AnalyticsService.Scopes.AnalyticsReadonly.GetStringValue() Dim _response As OutgoingWebResponse = arg.PrepareRequestUserAuthorization(New String() {scope}) ' Add Offline Access and forced Approval _response.Headers("location") += "&access_type=offline&approval_prompt=force" _response.Send() ' Will throw a ThreadAbortException to prevent sending another response. Return Nothing End Function The Cast Error Server Error in '/' Application. Unable to cast object of type 'DotNetOpenAuth.OAuth2.AuthorizationState' to type 'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidCastException: Unable to cast object of type 'DotNetOpenAuth.OAuth2.AuthorizationState' to type 'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'. I've spent the better part of a day on this, and it's starting to drive me nuts. Help is much appreciated.

    Read the article

  • Faster method for matrix-vector product with a large matrix in C or C++ for use in GMRES

    - by user35959
    I have a large, dense matrix A, and I aim to find the solution to the linear system Ax=b using an iterative method (in MATLAB was the plan using its built in GMRES). For more than 10,000 rows, this is too much for my computer to store in memory, but I know that the entries in A are constructed by two known vectors x and y of length N and the entries satisfy: A(i,j) = .5*(x[i]-x[j])^2+([y[i]-y[j])^2 * log(x[i]-x[j])^2+([y[i]-y[j]^2). MATLAB's GMRES command accepts as input a function call that can compute the matrix vector product A*x, which allows me to handle larger matrices than I can store in memory. To write the matrix-vecotr product function, I first tried this in matlab by going row by row and using some vectorization, but I avoid spawning the entire array A (since it would be too large). This was fairly slow unfortnately in my application for GMRES. My plan was to write a mex file for MATLAB to, which is in C, and ideally should be significantly faster than the matlab code. I'm rather new to C, so this went rather poorly and my naive attempt at writing the code in C was slower than my partially vectorized attempt in Matlab. #include <math.h> #include "mex.h" void Aproduct(double *x, double *ctrs_x, double *ctrs_y, double *b, mwSize n) { mwSize i; mwSize j; double val; for (i=0; i<n; i++) { for (j=0; j<i; j++) { val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2); b[i] = b[i] + .5* val * log(val) * x[j]; } for (j=i+1; j<n; j++) { val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2); b[i] = b[i] + .5* val * log(val) * x[j]; } } } The above is the computational portion of the code for the matlab mex file (which is slightly modified C, if I understand correctly). Please note that I skip the case i=j, since in that case the variable val will be a 0*log(0), which should be interpreted as 0 for me, so I just skip it. Is there a more efficient or faster way to write this? When I call this C function via the mex file in matlab, it is quite slow, slower even than the matlab method I used. This surprises me since I suspected that C code should be much faster than matlab. The alternative matlab method which is partially vectorized that I am comparing it with is function Ax = Aprod(x,ctrs) n = length(x); Ax = zeros(n,1); for j=1:(n-3) v = .5*((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2).*log((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2); v(j)=0; Ax(j) = dot(v,x(1:n-3); end (the n-3 is because there is actually 3 extra components, but they are dealt with separately,so I excluded that code). This is partly vectorized and only needs one for loop, so it makes some sense that it is faster. However, I was hoping I could go even faster with C+mex file. Any suggestions or help would be greatly appreciated! Thanks! EDIT: I should be more clear. I am open to any faster method that can help me use GMRES to invert this matrix that I am interested in, which requires a faster way of doing the matrix vector product without explicitly loading the array into memory. Thanks!
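    A hedged drop-in variant of the computational routine is below: it replaces pow() with plain multiplications, computes each pairwise log term once, and uses the symmetry A(i,j) = A(j,i) to add the same term into both b[i] and b[j], halving the work. It assumes b is zero-initialized by the caller (mxCreateDoubleMatrix does that) and that the MEX file is compiled with optimization enabled (mex -O).

```c
#include <math.h>
#include "mex.h"

/* Same contract as the original Aproduct, but each pair (i,j) with i < j is
   visited once: the 0.5*val*log(val) weight is computed a single time and
   contributes to both b[i] and b[j]. Assumes b[] is already zeroed. */
void Aproduct(const double *x, const double *ctrs_x, const double *ctrs_y,
              double *b, mwSize n)
{
    mwSize i, j;
    for (i = 0; i < n; i++) {
        double xi = ctrs_x[i], yi = ctrs_y[i];
        for (j = i + 1; j < n; j++) {
            double dx  = xi - ctrs_x[j];
            double dy  = yi - ctrs_y[j];
            double val = dx * dx + dy * dy;
            double w   = 0.5 * val * log(val);   /* A(i,j) == A(j,i) */
            b[i] += w * x[j];
            b[j] += w * x[i];
        }
    }
}
```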

    Read the article

  • Dynamically specify the type in C#

    - by Lirik
    I'm creating a custom DataSet and I'm under some constrains: I want the user to specify the type of the data which they want to store. I want to reduce type-casting because I think it will be VERY expensive. I will use the data VERY frequently in my application. I don't know what type of data will be stored in the DataSet, so my initial idea was to make it a List of objects, but I suspect that the frequent use of the data and the need to type-cast will be very expensive. The basic idea is this: class DataSet : IDataSet { private Dictionary<string, List<Object>> _data; /// <summary> /// Constructs the data set given the user-specified labels. /// </summary> /// <param name="labels"> /// The labels of each column in the data set. /// </param> public DataSet(List<string> labels) { _data = new Dictionary<string, List<object>>(); foreach (string label in labels) { _data.Add(label, new List<object>()); } } #region IDataSet Members public List<string> DataLabels { get { return _data.Keys.ToList(); } } public int Count { get { _data[_data.Keys[0]].Count; } } public List<object> GetValues(string label) { return _data[label]; } public object GetValue(string label, int index) { return _data[label][index]; } public void InsertValue(string label, object value) { _data[label].Insert(0, value); } public void AddValue(string label, object value) { _data[label].Add(value); } #endregion } A concrete example where the DataSet will be used is to store data obtained from a CSV file where the first column contains the labels. When the data is being loaded from the CSV file I'd like to specify the type rather than casting to object. The data could contain columns such as dates, numbers, strings, etc. Here is what it could look like: "Date","Song","Rating","AvgRating","User" "02/03/2010","Code Monkey",4.6,4.1,"joe" "05/27/2009","Code Monkey",1.2,4.5,"jill" The data will be used in a Machine Learning/Artificial Intelligence algorithm, so it is essential that I make the reading of data very fast. I want to eliminate type-casting as much as possible, since I can't afford to cast from 'object' to whatever data type is needed on every read. I've seen applications that allow the user to pick the specific data type for each item in the csv file, so I'm trying to make a similar solution where a different type can be specified for each column. I want to create a generic solution so I don't have to return a List<object> but a List<DateTime> (if it's a DateTime column) or List<double> (if it's a column of doubles). Is there any way that this can be achieved? Perhaps my approach is wrong, is there a better approach to this problem?
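    One possible shape for this, sketched below with invented names: store every column as a strongly typed List&lt;T&gt; behind the non-generic IList interface, and let callers ask for a whole column through a generic accessor, so there is a single cast per column access instead of one per cell. The CSV reader decides each column's element type when it creates the columns.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Each column is a strongly typed List<T> stored behind the non-generic IList.
// One cast per column access rather than one cast per cell read.
public class TypedDataSet
{
    private readonly Dictionary<string, IList> _columns =
        new Dictionary<string, IList>();

    public void AddColumn<T>(string label)
    {
        _columns.Add(label, new List<T>());
    }

    public void AddValue<T>(string label, T value)
    {
        ((List<T>)_columns[label]).Add(value);
    }

    public List<T> GetValues<T>(string label)
    {
        return (List<T>)_columns[label];   // throws if T doesn't match the column type
    }

    public int Count(string label)
    {
        return _columns[label].Count;
    }
}

// Usage while parsing the CSV from the example:
//   var ds = new TypedDataSet();
//   ds.AddColumn<DateTime>("Date");
//   ds.AddColumn<string>("Song");
//   ds.AddColumn<double>("Rating");
//   ds.AddValue("Date", DateTime.Parse("02/03/2010"));
//   List<double> ratings = ds.GetValues<double>("Rating");
```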

    Read the article

  • Input not cleared.

    - by SoulBeaver
    As the question says, for some reason my program is not flushing the input, or is using my variables in ways that I cannot identify at the moment. This is for a homework project where I've gone beyond what I had to do for it; now I just want the program to actually work :P Details to make the finding easier: The program executes flawlessly on the first run through. All throws work, and only the proper values (n > 0) are accepted and turned into binary. As soon as I enter my terminate input, the program goes into a loop and only asks for the terminate value again. When I run this program in NetBeans on my Linux laptop, the program crashes after I input the terminate value. In Visual C++ on Windows it goes into the loop just described. In the code I have tried to clear every stream and initialize every variable anew as the program restarts, but to no avail. I just can't see my mistake. I believe the error lies in either the main function:

    int main( void )
    {
        vector<int> store;
        int terminate = 1;
        do
        {
            int num = 0;
            string input = "";
            if( cin.fail() )
            {
                cin.clear();
                cin.ignore( numeric_limits<streamsize>::max(), '\n' );
            }
            cout << "Please enter a natural number." << endl;
            readLine( input, num );
            cout << "\nThank you. Number is being processed..." << endl;
            workNum( num, store );
            line;
            cout << "Go again? 0 to terminate." << endl;
            cin >> terminate; // No checking yet, just want it to work!
            cin.clear();
        } while( terminate );
        cin.get();
        return 0;
    }

    or in the function that reads the number:

    void readLine( string &input, int &num )
    {
        int buf = 1;
        stringstream ss;
        vec_sz size;
        if( ss.fail() )
        {
            ss.clear();
            ss.ignore( numeric_limits<streamsize>::max(), '\n' );
        }
        if( getline( cin, input ) )
        {
            size = input.size();
            for( int loop = 0; loop < size; ++loop )
                if( isalpha( input[loop] ) )
                    throw domain_error( "Invalid Input." );
            ss << input;
            ss >> buf;
            if( buf <= 0 )
                throw domain_error( "Invalid Input." );
            num = buf;
            ss.clear();
        }
    }
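    The looping behaviour described is consistent with the newline that cin >> terminate leaves in the stream: the next getline() inside readLine() then returns an empty line immediately. A minimal, self-contained sketch of the getline/>> mix with the leftover line discarded is below; it is not the full homework program, just the part of the loop that matters.

```cpp
#include <iostream>
#include <limits>
#include <string>

int main() {
    int terminate = 1;
    do {
        std::string input;
        std::cout << "Please enter a natural number." << std::endl;
        std::getline(std::cin, input);           // reads the whole line

        std::cout << "Go again? 0 to terminate." << std::endl;
        if (!(std::cin >> terminate)) {          // non-numeric input
            std::cin.clear();                    // clear the fail state first
            terminate = 0;                       // or re-prompt, as preferred
        }
        // Throw away the rest of the line, including '\n', so the next
        // getline() does not immediately return an empty string.
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
    } while (terminate);
    return 0;
}
```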

    Read the article

  • Magento Admin Custom Sales Report

    - by Ela
    Hi, I have to develop a module to export collection of product,order,customer combined attributes. So i thought rather than modifying the core sales report for this purpose better to do a custom functionality. These are the steps that i did but i am not able to produce it. Used magento 1.4.1 version for this. Under /var/www/magento141/app/code/core/Mage/Reports/etc/adminhtml.xml Added these lines for menus. <ereaders translate="title" module="reports"> <title>EReader Sales Report</title> <children> <ereaders translate="title" module="reports"> <title>Sales Report</title> <action>adminhtml/report_sales/ereaders</action> </ereaders> </children> </ereaders> Under /var/www/magento141/app/design/adminhtml/default/default/layout/sales.xml Added these lines for filter condition. <adminhtml_report_sales_ereaders> <update handle="report_sales"/> <reference name="content"> <block type="adminhtml/report_sales_sales" template="report/grid/container.phtml" name="sales.report.grid.container"> <block type="adminhtml/store_switcher" template="report/store/switcher/enhanced.phtml" name="store.switcher"> <action method="setStoreVarName"><var_name>store_ids</var_name></action> </block> <block type="sales/adminhtml_report_filter_form_order" name="grid.filter.form"> ---- </block> </block> </reference> </adminhtml_report_sales_ereaders> And then copied the needed block,model files from sales and renamed all of them into ereaders under /var/www/magento141/app/code/core/Mage/Adminhtml/. Then placed action for ereaders under /var/www/magento141/app/code/core/Mage/Adminhtml/controllers/Report/SalesController.php public function ereadersAction() { $this->_title($this->__('Reports'))->_title($this->__('Sales'))->_title($this->__('EReaders Sales')); $this->_showLastExecutionTime(Mage_Reports_Model_Flag::REPORT_ORDER_FLAG_CODE, 'ereaders'); $this->_initAction() ->_setActiveMenu('report/sales/ereaders') ->_addBreadcrumb(Mage::helper('adminhtml')->__('EReaders Sales Report'), Mage::helper('adminhtml')->__('EReaders Sales Report')); $gridBlock = $this->getLayout()->getBlock('report_sales_ereaders.grid'); $filterFormBlock = $this->getLayout()->getBlock('grid.filter.form'); $this->_initReportAction(array( $gridBlock, $filterFormBlock )); $this->renderLayout(); } Here when i use var_dump == //var_dump($this-getLayout()-getBlock('report_sales_ereaders.grid')); am getting bool(false) only. It does not call the ereaders grid, instead of its still loading blocks and grids from Sales only. I searched most of the files related to report, am not still able to find out the problem. Hope many of you gone through these sort of issues, please can anyone tell me where am making mistake or missing something.

    Read the article

  • nhibernate cascade - problem with detached entities

    - by Chev
    I am going nuts here trying to resolve a cascading update/delete issue :-) I have a Parent Entity with a collection Child Entities. If I modify the list of Child entities in a detached Parent object, adding, deleting etc - I am not seeing the updates cascaded correctly to the Child collection. Mapping Files: <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Domain" namespace="Domain"> <class name="Parent" table="Parent" > <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <property name="Name" type="String" length="250" /> <bag name="ParentChildren" lazy="false" table="Parent_Children" cascade="all-delete-orphan" inverse="true"> <key column="ParentId" on-delete="cascade" /> <one-to-many class="ParentChildren" /> </bag> </class> <class name="ParentChildren" table="Parent_Children"> <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <many-to-one name="Parent" class="Parent" column="ParentId" lazy="false" not-null="true" /> </class> </hibernate-mapping> Test [Test] public void Test() { Guid id; int lastModified; // add a child into 1st session then detach using(ISession session = Store.Local.Get<ISessionFactory>("SessionFactory").OpenSession()) { Console.Out.WriteLine("Selecting..."); Parent parent = (Parent) session.Get(typeof (Parent), new Guid("4bef7acb-bdae-4dd0-ba1e-9c7500f29d47")); id = parent.Id; lastModified = parent.LastModified + 1; // ensure the detached version used later is equal to the persisted version Console.Out.WriteLine("Adding Child..."); Child child = (from c in session.Linq<Child>() select c).First(); parent.AddChild(child, 0m); session.Flush(); session.Dispose(); // not needed i know } // attach a parent, then save with no Children using (ISession session = Store.Local.Get<ISessionFactory>("SessionFactory").OpenSession()) { Parent parent = new Parent("Test"); parent.Id = id; parent.LastModified = lastModified; session.Update(parent); session.Flush(); } } I assume that the fact that the product has been updated to have no children in its collection - the children would be deleted in the Parent_Child table. The problems seems to be something to do with attaching the Product to the new session? As the cascade is set to all-delete-orphan I assume that changes to the collection would be propagated to the relevant entities/tables? In this case deletes? What am I missing here? C
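    For what it's worth, orphan deletion under cascade="all-delete-orphan" is driven by changes NHibernate can observe on the collection it loaded; a Parent built by hand in the second session with an empty bag gives it nothing to diff against. A common pattern, sketched below against a 2.x-era API, is to reattach by loading the parent in the new session and mutating its collection (ISession.Merge is another option for genuinely detached graphs). The id variable and the transaction handling are assumptions, not taken from the test above.

```csharp
// Sketch: the second session removes all children via orphan deletion by
// changing the *loaded* collection instead of updating a hand-built Parent.
using (ISession session = Store.Local.Get<ISessionFactory>("SessionFactory").OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    Parent parent = session.Get<Parent>(id);   // brings the real collection into the session
    parent.Name = "Test";
    parent.ParentChildren.Clear();             // all-delete-orphan deletes the child rows
    tx.Commit();
}
```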

    Read the article

  • Why do some Flask session values disappear from the session after closing the browser window, but then reappear later without me adding them?

    - by Ben
    So my understanding of Flask sessions is that I can use them like a dictionary and add values to a session by doing: session['key name'] = 'some value here'. And that works fine. On a route that the client calls using an AJAX POST, I assign a value to the session, and it works fine. I can click on various pages of my site and the value stays in the session. If I close the browser window, however, and then go back to my site, the session value I had in there is gone. So that's weird, and you would think the problem is that the session isn't permanent. I also implemented Flask-OpenID, and that uses the session to store information, and that does persist if I close the browser window and open it back up again. I also checked the cookie after closing the browser window, but before going back to my site, and the cookie is indeed still there. Another odd piece of behaviour (which may be related) is that some values I have written to the session for testing purposes go away when I access the AJAX POST route and assign the correct value. So that is odd, but what is truly weird is that when I then close the browser window and open it up again, and have thus lost the value I was trying to retain, the ones that I lost previously actually return! They aren't being reassigned, because there's no code in my Python files to reassign those values. Here are some outputs to help make it clearer. They are all output from a route for a real page, not the AJAX POST route I mentioned above. This is the output after I have assigned the value I want to store in the session. The value key is 'userid' - all the other values are dummy ones I added while trying to solve this problem. 'userid': 8 will stay in the session as long as I don't close the browser window. I can access other routes and the value will stay there just like it should.

    ['session.=', <SecureCookieSession {'userid': 8, 'test_variable_num': 102, 'adding using before request': 'hi', '_permanent': True, 'test_variable_text': 'hi!'}>]

    If I do close the browser window and go back into the site, but without redoing the AJAX POST request, I get this output:

    ['session.=', <SecureCookieSession {'adding using before request': 'hi', '_permanent': True, 'yo': 'yo'}>]

    The 'yo' value was not in the first output. I don't know where it came from. I searched my code for 'yo' and there are no instances of me assigning that value anywhere. I think I may have added it to the session days ago. So it seems like it is persisting, but being hidden when the other values are written. And this last one is me accessing the AJAX POST route again, and then going to the page that prints out the keys using debug. It is the same output as the first one I pasted above, which you would expect, and the 'yo' value is gone again (but it will come back if I close the browser window).

    ['session.=', <SecureCookieSession {'userid': 8, 'test_variable_num': 102, 'adding using before request': 'hi', '_permanent': True, 'test_variable_text': 'hi!'}>]

    I tested this in both Chrome and Firefox. So I find this all weird, and I am guessing it stems from a misunderstanding of how sessions work. I think they're dictionaries, and I can write dictionary values into them and retrieve them days later, as long as I set the session to permanent and the cookie doesn't get deleted. Any ideas why this weird behaviour is happening?
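    For reference, a minimal sketch of how a Flask session is normally made to outlive the browser window (the route and values here are illustrative, not taken from the post). The dumps above already show '_permanent': True, so this may only help rule one cause out: session.permanent has to be set during the same request that writes the value, otherwise that particular Set-Cookie goes out as a browser-session cookie.

    from datetime import timedelta
    from flask import Flask, session, jsonify

    app = Flask(__name__)
    app.secret_key = 'replace-me'                        # required for signed session cookies
    app.permanent_session_lifetime = timedelta(days=7)   # how long the permanent cookie lives

    @app.route('/set-user', methods=['POST'])
    def set_user():
        session.permanent = True    # mark the session on the request that writes the value
        session['userid'] = 8
        return jsonify(ok=True)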

    Read the article

  • Php template caching design

    - by Thomas
    Hello to all, I want to include caching in my app design - caching templates, for starters. The design I have used so far is very modular. I have created an ORM implementation for all my tables, and each table is represented by a corresponding class. All requests are handled by one controller, which routes them to the appropriate webmethod functions. I am using a template class for handling UI parts. What I have in mind for caching is a separate Cache class that handles caching with the flexibility to store either in files, APC, or memcache. Right now I am testing with file caching. Some thoughts: Should I include the logic for checking for cached versions in the Template class, or in the webmethods which handle the incoming requests and which eventually call the Template class? In the first case, things are pretty simple, as I will not have to change anything more than passing the template class an extra argument (whether to load from cache or not). In the second case, however, I am thinking of checking for a cached version immediately in the webmethod and, if found, returning it. This will save all the processing done until the logic reaches the template (the first scenario). Both scenarios, however, rely on an accurate mechanism for invalidating caches, which brings us to invalidating caches. As I see it (and you can add your input freely), a cached template file becomes invalid if:
    a. the expiration set is reached
    b. the template file itself is updated (i.e. by the developer adding a new line)
    c. the webmethod that handles the request changes (i.e. the developer adds/deletes something in the code)
    d. content coming from the db and ending up in the template file is modified
    I am thinking of storing a JSON-encoded array inside the cached file. The first value will be the expiration timestamp of the cache, the second will be the modification time of the PHP file with the code handling the request (to cope with option c above), and the third will be the content itself. The validation process I am considering, according to the above scenarios, is:
    a. if the expiration of the cached file (stored in the array) is reached, delete the cache file
    b. if the cached file's mod time is smaller than the template skeleton file's mod time, delete the cached file
    c. if the mod time of the PHP file is greater than the one stored in the cache, delete the cached file
    d. this is tricky. In the ORM implementation I have added event handlers (which fire when adding, updating, or deleting objects). I could delete the cache file every time an object that provides content to the template is modified. The problem is how to keep track of which cached files correspond to each schema object. Take this example: a user has a short profile page and a full profile page (two templates), and these templates can be cached. Now, every time the user modifies his profile, the event handler would need to know which templates or cached files correspond to the user, so that these files can be deleted. I could store them in the db, but I am looking for a better approach.
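    A rough sketch of validation rules (a)-(c) as described above, assuming file-based storage; function and variable names are illustrative, not part of the existing code:

    <?php
    // Cache file layout: json_encode(array($expiresAt, $handlerMtime, $content))
    function isCacheValid($cacheFile, $templateFile, $handlerFile) {
        if (!file_exists($cacheFile)) {
            return false;
        }
        $data = json_decode(file_get_contents($cacheFile), true);
        if (!is_array($data) || count($data) < 3) {
            return false;
        }
        list($expiresAt, $handlerMtime, $content) = $data;
        if (time() > $expiresAt) {                               // rule (a): expired
            return false;
        }
        if (filemtime($cacheFile) < filemtime($templateFile)) {  // rule (b): template skeleton changed
            return false;
        }
        if (filemtime($handlerFile) > $handlerMtime) {           // rule (c): handler code changed
            return false;
        }
        return true;
    }

    For rule (d), a common pattern is cache tagging: when a cached file is written, record its path under a tag for each object it depends on (e.g. "user_8"), and have the ORM event handlers delete every file registered under that tag instead of working out the mapping at invalidation time.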

    Read the article

  • Advanced Regex: Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not such a simple task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is what it looks like (I've broken it down into separate lines to make what it does more understandable):

    1  (?<outer>\()?
    2  (?<scheme>http(?<secure>s)?://)?
    3  (?<url>
    4  (?(scheme)
    5  (?:www\.)?
    6  |
    7  www\.
    8  )
    9  [a-z0-9]
    10 (?(outer)
    11 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))
    12 |
    13 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+
    14 )
    15 )
    16 (?<ending>(?(outer)\)))

    As you may see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (cšžcd), which allow our localised URLs to be parsed as well. You can easily omit them if you'd like. Anyway, here's what it does (referring to the line numbers above):
    1 - detects if the URL starts with an opening brace (is contained inside braces) and stores it in the "outer" named capture group
    2 - checks if it starts with a URL scheme, also detecting whether the scheme is SSL or not
    3 - starts parsing the URL itself (it will be stored in the "url" named capture group)
    4-8 - an if statement that says: if "scheme" was present then the www. part is optional, otherwise it is mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www)
    9 - the first character after http:// or www. should be either a letter or a number (this can be extended if you'd like to cover even more links, but I've decided not to because I can't think of a link that would start with some obscure character)
    10-14 - an if statement that says: if "outer" (braces) was present, capture everything up to the last closing brace, otherwise capture everything
    15 - closes the named capture group for the URL
    16 - if an opening brace was present, capture the closing brace as well and store it in the "ending" named capture group
    The first and last lines used to have \s* in them as well, so the user could also write an opening brace and put a space inside before pasting the link. Anyway, my code that does link replacement with actual anchor HTML elements looks exactly like this:

    value = Regex.Replace(
        value,
        @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+))(?<ending>(?(outer)\)))",
        "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}",
        RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);

    As you can see, I'm using named capture groups to replace the link with an anchor tag:

    "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}"

    I could also omit the http(s) part in the anchor display to make links look friendlier, but for now I've decided not to. Question: I would like my links to be replaced with shortened display text as well. So when a user copies a very long link (for instance, a link from Google Maps, which usually generates long URLs), I would like to shorten the visible part of the anchor tag. The link would still work, but the visible part of the anchor tag would be shortened to some number of characters. I could also append an ellipsis at the end if at all possible (and make things even more perfect). Does the Regex.Replace() method support replacement notations that would let me do this in a single call?
    Something similar to what the string.Format() method does when you'd like to format values as strings (decimals, dates, etc.).
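    One option worth looking at here (a sketch, not tested against the exact pattern above): plain substitution tokens like ${url} cannot truncate text, but Regex.Replace has an overload that takes a MatchEvaluator delegate, which can shorten the visible part while keeping the full URL in href, still in a single call. The 50-character cut-off below is arbitrary.

    // pattern is the same regular expression string used above
    value = Regex.Replace(value, pattern, m =>
    {
        string url = "http" + m.Groups["secure"].Value + "://" + m.Groups["url"].Value;
        string display = url.Length > 50 ? url.Substring(0, 47) + "..." : url; // shorten only what the user sees
        return m.Groups["outer"].Value
             + "<a href=\"" + url + "\">" + display + "</a>"
             + m.Groups["ending"].Value;
    }, RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);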

    Read the article

  • Why does my UITableView only show two rows of each section?

    - by Mike Owens
    I have a UITableView and when I build it only two rows will be displayed. Each section has more than two cells to be displayed, I am confused since they are all done the same?`#import #import "Store.h" import "VideoViewController.h" @implementation Store @synthesize listData; // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [self createTableData]; [super viewDidLoad]; } (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } (void)viewDidUnload { //self.listData = nil; //[super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } pragma mark - pragma mark Table View Data Source Methods // Customize the number of sections in the table view. - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return [videoSections count]; } //Get number of rows -(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [self.listData count]; } -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *StoreTableIdentifier = @"StoreTableIdentifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:StoreTableIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:StoreTableIdentifier] autorelease]; } cell.textLabel.text = [[[listData objectAtIndex:indexPath.section] objectAtIndex:indexPath.row] objectForKey:@"name"]; //Change font and color of tableView cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; cell.textLabel.font=[UIFont fontWithName:@"Georgia" size:16.0]; cell.textLabel.textColor = [UIColor brownColor]; return cell; } -(NSString *)tableView: (UITableView *)tableView titleForHeaderInSection: (NSInteger) section { return [videoSections objectAtIndex:section]; } -(void)tableView: (UITableView *)tableView didSelectRowAtIndexPath: (NSIndexPath *)indexPath { VideoViewController *videoViewController = [[VideoViewController alloc] initWithNibName: @"VideoViewController" bundle:nil]; videoViewController.detailURL = [[NSURL alloc] initWithString: [[[listData objectAtIndex:indexPath.section] objectAtIndex:indexPath.row] objectForKey:@"url"]]; videoViewController.title = [[[listData objectAtIndex:indexPath.section] objectAtIndex:indexPath.row] objectForKey:@"name"]; [self.navigationController pushViewController:videoViewController animated:YES]; [videoViewController release]; } pragma mark Table View Methods //Data in table cell -(void) createTableData { NSMutableArray *beginningVideos; NSMutableArray *intermediateVideos; videoSections = [[NSMutableArray alloc] initWithObjects: @"Beginning Videos", @"Intermediate Videos", nil]; beginningVideos = [[NSMutableArray alloc] init]; intermediateVideos = [[NSMutableArray alloc] init]; [beginningVideos addObject:[[NSMutableDictionary alloc] initWithObjectsAndKeys:@"Shirts", @"name", @"http://www.andalee.com/iPhoneVideos/testMovie.m4v", @"url", nil]]; [beginningVideos addObject:[[NSMutableDictionary alloc] initWithObjectsAndKeys:@"Posters", @"name", @"http://devimages.apple.com/iphone/samples/bipbopall.html", @"url", nil]]; [beginningVideos addObject:[[NSMutableDictionary alloc] initWithObjectsAndKeys:@"Stickers",@"name", @"http://www.andalee.com/iPhoneVideos/mov.MOV",@"url",nil]]; [beginningVideos 
addObject:[[NSMutableDictionary alloc] initWithObjectsAndKeys:@"Egyptian",@"name", @"http://www.andalee.com/iPhoneVideos/2ndMovie.MOV",@"url",nil]]; [intermediateVideos addObject:[[NSMutableDictionary alloc] initWithObjectsAndKeys:@"Drum Solo", @"name", @"http://www.andalee.com", @"url", nil]]; [intermediateVideos addObject:[[NSMutableDictionary alloc] initWithObjectsAndKeys:@"Veil", @"name", @"http://www.andalee.com", @"url", nil]]; [intermediateVideos addObject:[[NSMutableDictionary alloc] initWithObjectsAndKeys:@"Three Quarter Shimmy",@"name", @"http://www.andalee.com", @"url",nil]]; listData = [[NSMutableArray alloc] initWithObjects:beginningVideos, intermediateVideos, nil]; [beginningVideos release]; [intermediateVideos release]; } (void)dealloc { [listData release]; [videoSections release]; [super dealloc]; } @end `
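    One thing worth checking (a guess from reading the code above, not a confirmed diagnosis): listData holds two inner arrays, one per section, so returning [self.listData count] from numberOfRowsInSection: reports 2 rows for every section. Counting the section's own array instead would look roughly like this:

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        // listData is an array of section arrays; count the inner array for this section
        return [[self.listData objectAtIndex:section] count];
    }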

    Read the article

  • Partial string search in boost::multi_index_container

    - by user361699
    I have a struct to store info about persons and a multi_index_container to store such objects:

    struct person
    {
        std::string m_first_name;
        std::string m_last_name;
        std::string m_third_name;
        std::string m_address;
        std::string m_phone;

        person();
        person(std::string f, std::string l, std::string t = "", std::string a = DEFAULT_ADDRESS, std::string p = DEFAULT_PHONE)
            : m_first_name(f), m_last_name(l), m_third_name(t), m_address(a), m_phone(p) {}
    };

    typedef multi_index_container<
        person,
        indexed_by<
            ordered_unique<identity<person> >,
            ordered_non_unique<
                composite_key<
                    person,
                    member<person, std::string, &person::m_last_name>,
                    member<person, std::string, &person::m_first_name>
                >
            >
        >
    > persons_set;

    operator< and operator<< implementations for person:

    bool operator<(const person &lhs, const person &rhs)
    {
        if (lhs.m_last_name == rhs.m_last_name)
        {
            if (lhs.m_first_name == rhs.m_first_name)
                return (lhs.m_third_name < rhs.m_third_name);
            return (lhs.m_first_name < rhs.m_first_name);
        }
        return (lhs.m_last_name < rhs.m_last_name);
    }

    std::ostream& operator<<(std::ostream &s, const person &rhs)
    {
        s << "Person's last name: " << rhs.m_last_name << std::endl;
        s << "Person's name: " << rhs.m_first_name << std::endl;
        if (!rhs.m_third_name.empty())
            s << "Person's third name: " << rhs.m_third_name << std::endl;
        s << "Phone: " << rhs.m_phone << std::endl;
        s << "Address: " << rhs.m_address << std::endl;
        return s;
    }

    Add several persons into the container:

    person ("Alex", "Johnson", "Somename");
    person ("Alex", "Goodspeed");
    person ("Petr", "Parker");
    person ("Petr", "Goodspeed");

    Now I want to find a person by last name (the first member of the second index in the multi_index_container):

    persons_set::nth_index<1>::type &names_index = my_set.get<1>();
    std::pair<persons_set::nth_index<1>::type::const_iterator, persons_set::nth_index<1>::type::const_iterator> n_it = names_index.equal_range("Goodspeed");
    std::copy(n_it.first, n_it.second, std::ostream_iterator<person>(std::cout));

    It works great. Both 'Goodspeed' persons are found. Now let's try to find a person by part of a last name:

    std::pair<persons_set::nth_index<1>::type::const_iterator, persons_set::nth_index<1>::type::const_iterator> n_it = names_index.equal_range("Good");
    std::copy(n_it.first, n_it.second, std::ostream_iterator<person>(std::cout));

    This returns nothing, but partial string search works like a charm in std::set. So I can't figure out what the problem is. I only wrapped the strings in a struct. Maybe it's the operator< implementation? Thanks in advance for any help.
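    For what it's worth, equal_range on an ordered index compares complete keys, so "Good" only matches elements whose last name is exactly "Good"; it is not a substring or prefix search (std::set behaves the same way, so a partial match appearing to work there was likely a coincidence of the data). A sketch of a prefix search on the last-name index, assuming index #1 is keyed by last name first as reconstructed above:

    #include <iostream>
    #include <string>

    template <typename Index>
    void print_by_last_name_prefix(const Index &idx, const std::string &prefix)
    {
        // lower_bound finds the first key >= prefix; walk forward while the prefix still matches
        for (typename Index::const_iterator it = idx.lower_bound(prefix);
             it != idx.end() && it->m_last_name.compare(0, prefix.size(), prefix) == 0;
             ++it)
        {
            std::cout << *it;
        }
    }

    // usage: print_by_last_name_prefix(my_set.get<1>(), "Good");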

    Read the article

  • Self-updating collection concurrency issues

    - by DEHAAS
    I am trying to build a self-updating collection. Each item in the collection has a position (x,y). When the position is changed, an event is fired, and the collection will relocate the item. Internally the collection is using a “jagged dictionary”. The outer dictionary uses the x-coordinate a key, while the nested dictionary uses the y-coordinate a key. The nested dictionary then has a list of items as value. The collection also maintains a dictionary to store the items position as stored in the nested dictionaries – item to stored location lookup. I am having some trouble making the collection thread safe, which I really need. Source code for the collection: public class PositionCollection<TItem, TCoordinate> : ICollection<TItem> where TItem : IPositionable<TCoordinate> where TCoordinate : struct, IConvertible { private readonly object itemsLock = new object(); private readonly Dictionary<TCoordinate, Dictionary<TCoordinate, List<TItem>>> items; private readonly Dictionary<TItem, Vector<TCoordinate>> storedPositionLookup; public PositionCollection() { this.items = new Dictionary<TCoordinate, Dictionary<TCoordinate, List<TItem>>>(); this.storedPositionLookup = new Dictionary<TItem, Vector<TCoordinate>>(); } public void Add(TItem item) { if (item.Position == null) { throw new ArgumentException("Item must have a valid position."); } lock (this.itemsLock) { if (!this.items.ContainsKey(item.Position.X)) { this.items.Add(item.Position.X, new Dictionary<TCoordinate, List<TItem>>()); } Dictionary<TCoordinate, List<TItem>> xRow = this.items[item.Position.X]; if (!xRow.ContainsKey(item.Position.Y)) { xRow.Add(item.Position.Y, new List<TItem>()); } xRow[item.Position.Y].Add(item); if (this.storedPositionLookup.ContainsKey(item)) { this.storedPositionLookup[item] = new Vector<TCoordinate>(item.Position); } else { this.storedPositionLookup.Add(item, new Vector<TCoordinate>(item.Position)); // Store a copy of the original position } item.Position.PropertyChanged += (object sender, PropertyChangedEventArgs eventArgs) => this.UpdatePosition(item, eventArgs.PropertyName); } } private void UpdatePosition(TItem item, string propertyName) { lock (this.itemsLock) { Vector<TCoordinate> storedPosition = this.storedPositionLookup[item]; this.RemoveAt(storedPosition, item); this.storedPositionLookup.Remove(item); } } } I have written a simple unit test to check for concurrency issues: [TestMethod] public void TestThreadedPositionChange() { PositionCollection<Crate, int> collection = new PositionCollection<Crate, int>(); Crate crate = new Crate(new Vector<int>(5, 5)); collection.Add(crate); Parallel.For(0, 100, new Action<int>((i) => crate.Position.X += 1)); Crate same = collection[105, 5].First(); Assert.AreEqual(crate, same); } The actual stored position varies every time I run the test. I appreciate any feedback you may have.
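    One observation and a sketch (names such as AddInternal are hypothetical, not part of the code above): inside UpdatePosition the item is removed from its old cell but, at least in the excerpt shown, never re-inserted at the new position or re-recorded in storedPositionLookup, so doing all three steps under the same lock is what keeps other threads from observing the item "between" cells.

    private void UpdatePosition(TItem item, string propertyName)
    {
        lock (this.itemsLock)
        {
            Vector<TCoordinate> storedPosition = this.storedPositionLookup[item];
            this.RemoveAt(storedPosition, item);                                      // take it out of the old cell
            this.AddInternal(item);                                                   // hypothetical: same bucket insert as Add, minus re-subscribing to PropertyChanged
            this.storedPositionLookup[item] = new Vector<TCoordinate>(item.Position); // remember where it now lives
        }
    }

    Separately, the test's Parallel.For increments crate.Position.X from many threads without synchronization, so increments can be lost before any event ever fires; a final X of 105 is not guaranteed even if the collection itself is made thread safe.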

    Read the article

  • How can I add a previous button to this Jquery Content Slider?

    - by user1269988
    I did this nice tutorial for a Jquery Content Slider: http://brenelz.com/blog/build-a-content-slider-with-jquery/ Here is my test page: http://www.gregquinn.com/oneworld/brenez_slider_test.html But the Left button is hidden on the first slide and I do not want it to be. I don't know much about jquery but I tried to set the left button from opacity o to 100 or 1 and it didn't work the button showed up once but did not work. Does anyone know how to do this? Here is the code: (function($) { $.fn.ContentSlider = function(options) { var defaults = { leftBtn : 'images/panel_previous_btn.gif', rightBtn : 'images/panel_next_btn.gif', width : '900px', height : '400px', speed : 400, easing : 'easeOutQuad', textResize : false, IE_h2 : '26px', IE_p : '11px' } var defaultWidth = defaults.width; var o = $.extend(defaults, options); var w = parseInt(o.width); var n = this.children('.cs_wrapper').children('.cs_slider').children('.cs_article').length; var x = -1*w*n+w; // Minimum left value var p = parseInt(o.width)/parseInt(defaultWidth); var thisInstance = this.attr('id'); var inuse = false; // Prevents colliding animations function moveSlider(d, b) { var l = parseInt(b.siblings('.cs_wrapper').children('.cs_slider').css('left')); if(isNaN(l)) { var l = 0; } var m = (d=='left') ? l-w : l+w; if(m<=0&&m>=x) { b .siblings('.cs_wrapper') .children('.cs_slider') .animate({ 'left':m+'px' }, o.speed, o.easing, function() { inuse=false; }); if(b.attr('class')=='cs_leftBtn') { var thisBtn = $('#'+thisInstance+' .cs_leftBtn'); var otherBtn = $('#'+thisInstance+' .cs_rightBtn'); } else { var thisBtn = $('#'+thisInstance+' .cs_rightBtn'); var otherBtn = $('#'+thisInstance+' .cs_leftBtn'); } if(m==0||m==x) { thisBtn.animate({ 'opacity':'0' }, o.speed, o.easing, function() { thisBtn.hide(); }); } if(otherBtn.css('opacity')=='0') { otherBtn.show().animate({ 'opacity':'1' }, { duration:o.speed, easing:o.easing }); } } } function vCenterBtns(b) { // Safari and IE don't seem to like the CSS used to vertically center // the buttons, so we'll force it with this function var mid = parseInt(o.height)/2; b .find('.cs_leftBtn img').css({ 'top':mid+'px', 'padding':0 }).end() .find('.cs_rightBtn img').css({ 'top':mid+'px', 'padding':0 }); } return this.each(function() { $(this) // Set the width and height of the div to the defined size .css({ width:o.width, height:o.height }) // Add the buttons to move left and right .prepend('<a href="#" class="cs_leftBtn"><img src="'+o.leftBtn+'" /></a>') .append('<a href="#" class="cs_rightBtn"><img src="'+o.rightBtn+'" /></a>') // Dig down to the article div elements .find('.cs_article') // Set the width and height of the div to the defined size .css({ width:o.width, height:o.height }) .end() // Animate the entrance of the buttons .find('.cs_leftBtn') .css('opacity','0') .hide() .end() .find('.cs_rightBtn') .hide() .animate({ 'width':'show' }); // Resize the font to match the bounding box if(o.textResize===true) { var h2FontSize = $(this).find('h2').css('font-size'); var pFontSize = $(this).find('p').css('font-size'); $.each(jQuery.browser, function(i) { if($.browser.msie) { h2FontSize = o.IE_h2; pFontSize = o.IE_p; } }); $(this).find('h2').css({ 'font-size' : parseFloat(h2FontSize)*p+'px', 'margin-left' : '66%' }); $(this).find('p').css({ 'font-size' : parseFloat(pFontSize)*p+'px', 'margin-left' : '66%' }); $(this).find('.readmore').css({ 'font-size' : parseFloat(pFontSize)*p+'px', 'margin-left' : '66%' }); } // Store a copy of the button in a variable to pass to moveSlider() var leftBtn 
= $(this).children('.cs_leftBtn'); leftBtn.bind('click', function() { if(inuse===false) { inuse = true; moveSlider('right', leftBtn); } return false; // Keep the link from firing }); // Store a copy of the button in a variable to pass to moveSlider() var rightBtn = $(this).children('.cs_rightBtn'); rightBtn.bind('click', function() { if(inuse===false) { inuse=true; moveSlider('left', rightBtn); } return false; // Keep the link from firing }); }); } })(jQuery)
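    One way to try it (a sketch against the plugin code above, not tested): initialise the left button visible instead of faded out, and release the inuse lock when a click would move out of range; otherwise the first out-of-range click on the now-visible button wedges the slider, because the animation callback that resets inuse never runs.

    // In the return this.each(...) setup, start the left button visible:
    .find('.cs_leftBtn')
        .css('opacity', '1')
        .show()
    .end()

    // In moveSlider(), add an else branch to the range check:
    if (m <= 0 && m >= x) {
        // ... existing animation code ...
    } else {
        inuse = false; // out-of-range click: keep the slider usable
    }

    The if(m==0||m==x) block inside moveSlider still fades the corresponding button out when an end is reached, so that block would also need adjusting if the buttons should stay visible at all times.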

    Read the article

  • Parse filename, insert to SQL

    - by jakesankey
    Thanks to Code Poet, I am now working off of this code to parse all .txt files in a directory and store them in a database. I need a bit more help though... The file names are R303717COMP_148A2075_20100520.txt (the middle section is unique per file). I would like to add something to code so that it can parse out the R303717COMP and put that in the left column of the database such as: (this is not the only R number we have) R303717COMP data data data R303717COMP data data data R303717COMP data data data etc Lastly, I would like to have it store each full file name into another table that gets checked so that it doesn't get processed twice.. Any Help is appreciated. using System; using System.Data; using System.Data.SQLite; using System.IO; namespace CSVImport { internal class Program { private static void Main(string[] args) { using (SQLiteConnection con = new SQLiteConnection("data source=data.db3")) { if (!File.Exists("data.db3")) { con.Open(); using (SQLiteCommand cmd = con.CreateCommand()) { cmd.CommandText = @" CREATE TABLE [Import] ( [RowId] integer PRIMARY KEY AUTOINCREMENT NOT NULL, [FeatType] varchar, [FeatName] varchar, [Value] varchar, [Actual] decimal, [Nominal] decimal, [Dev] decimal, [TolMin] decimal, [TolPlus] decimal, [OutOfTol] decimal, [Comment] nvarchar);"; cmd.ExecuteNonQuery(); } con.Close(); } con.Open(); using (SQLiteCommand insertCommand = con.CreateCommand()) { insertCommand.CommandText = @" INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, Comment) VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @Comment);"; insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String)); string[] files = Directory.GetFiles(Environment.CurrentDirectory, "TextFile*.*"); foreach (string file in files) { string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } foreach (SQLiteParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] {','}); for (int i = 0; i < values.Length - 1; i++) { SQLiteParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? value : 0; } else { param.Value = values[i]; } } insertCommand.ExecuteNonQuery(); } } } con.Close(); } } } }
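    A sketch of the two additions (the ProcessedFiles table and its column name are made up, so adjust them to the real schema): pull the R number off the front of the file name, pass it as an extra parameter on the insert, and keep a ProcessedFiles table that is checked before a file is parsed.

    // Inside the foreach (string file in files) loop, before reading the lines:
    string fileName = Path.GetFileName(file);                               // e.g. R303717COMP_148A2075_20100520.txt
    string rNumber = Path.GetFileNameWithoutExtension(file).Split('_')[0];  // "R303717COMP"

    // Skip files that were already imported (assumes a ProcessedFiles table created alongside Import):
    using (SQLiteCommand check = con.CreateCommand())
    {
        check.CommandText = "SELECT COUNT(*) FROM ProcessedFiles WHERE FileName = @FileName;";
        check.Parameters.Add(new SQLiteParameter("@FileName", DbType.String));
        check.Parameters["@FileName"].Value = fileName;
        if (Convert.ToInt64(check.ExecuteScalar()) > 0)
            continue; // already processed
    }

    // The INSERT INTO Import statement would gain an RNumber column and an @RNumber parameter
    // whose value is set to rNumber for every row parsed from this file.

    // After the file has been imported successfully, remember it:
    using (SQLiteCommand mark = con.CreateCommand())
    {
        mark.CommandText = "INSERT INTO ProcessedFiles (FileName) VALUES (@FileName);";
        mark.Parameters.Add(new SQLiteParameter("@FileName", DbType.String));
        mark.Parameters["@FileName"].Value = fileName;
        mark.ExecuteNonQuery();
    }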

    Read the article
