Search Results

Search found 4012 results on 161 pages for 'concept hierarchy'.

Page 131/161 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • Pros/cons of reading connection string from physical file vs Application object (ASP.NET)?

    - by HaterTot
    My ASP.NET application reads an xml file to determine which environment it's currently in (e.g. local, development, production). It checks this file every single time it opens a connection to the database, in order to know which connection string to grab from the Application Settings. I'm entering a phase of development where efficiency is becoming a concern. I don't think it's a good idea to have to read a file on a physical disk every single time I wish to access the database (very often). I was considering storing the connection string in Application["ConnectionString"]. So the code would be public static string GetConnectionString() { if (Application["ConnectionString"] == null) { XmlDocument doc = new XmlDocument(); doc.Load(HttpContext.Current.Request.PhysicalApplicationPath + "bin/ServerEnvironment.xml"); XmlElement xe = (XmlElement) xnl[0]; switch (xe.InnerText.ToString().ToLower()) { case "local": connString = Settings.Default.ConnectionStringLocal; break; case "development": connString = Settings.Default.ConnectionStringDevelopment; break; case "production": connString = Settings.Default.ConnectionStringProduction; break; default: throw new Exception("no connection string defined"); } Application["ConnectionString"] = connString; } return Application["ConnectionString"].ToString(); } I didn't design the application, so I figure there must have been a reason for reading the xml file every time (to change settings while the application runs?). I have very little concept of the inner workings here. What are the pros and cons? Do you think I'd see a small performance gain by implementing the function above? Thanks!

    Read the article
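
    A minimal sketch of the caching idea above: read ServerEnvironment.xml once per application lifetime and keep the result in a static field. The Settings properties are taken from the question, and the element holding the environment name is an assumption (the question's snippet elides how the node is selected):

        using System;
        using System.Web;
        using System.Xml;

        public static class ConnectionStringCache
        {
            private static readonly object Sync = new object();
            private static string cached;

            public static string GetConnectionString()
            {
                if (cached == null)
                {
                    lock (Sync)
                    {
                        if (cached == null)
                        {
                            // The disk is hit only on the first call after the app domain starts.
                            XmlDocument doc = new XmlDocument();
                            doc.Load(HttpContext.Current.Request.PhysicalApplicationPath
                                     + "bin/ServerEnvironment.xml");
                            string environment = doc.DocumentElement.InnerText.Trim().ToLower();

                            switch (environment)
                            {
                                case "local": cached = Settings.Default.ConnectionStringLocal; break;
                                case "development": cached = Settings.Default.ConnectionStringDevelopment; break;
                                case "production": cached = Settings.Default.ConnectionStringProduction; break;
                                default: throw new Exception("no connection string defined");
                            }
                        }
                    }
                }
                return cached;
            }
        }

    The trade-off is the one hinted at in the question: switching environments now requires an application restart (or an explicit cache reset), which may be exactly why the original code re-read the file on every call.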

  • Text message intent - catch and send

    - by Espen
    Hi! I want to be able to control incoming text messages. My application is still at the "proof of concept" stage and I'm trying to learn Android programming as I go. First, my application needs to catch incoming text messages. If the message is from a known number, it deals with it. If not, it passes the message on, as if nothing had happened, to the default text message application. I have no doubt it can be done, but I still have some concerns and I see some pitfalls in how things are done on Android. Getting the incoming text message could be fairly easy - except when there are other messaging applications installed and maybe the user wants normal text messages to pop up in one of them - and they will, after my application has had a look at them first. How can I be sure my application gets first pick of incoming text messages? And after that I need to send most text messages through to whatever other text message application the user has chosen, so the user can actually read the messages my application didn't need. Since Android uses intents that are relative at best, I don't see how I can ensure my application gets a peek at all incoming text messages, and then stops them or sends them through to the default text messaging application...

    Read the article

  • Webcrawler, feedback?

    - by Jan Kuboschek
    Hey folks, every once in a while I have the need to automate data collection tasks from websites. Sometimes I need a bunch of URLs from a directory, sometimes I need an XML sitemap (yes, I know there is lots of software for that, and online services). Anyway, as a follow-up to my previous question, I've written a little webcrawler that can visit websites. Basic crawler class to easily and quickly interact with one website. Override "doAction(String URL, String content)" to process the content further (e.g. store it, parse it). The concept allows for multi-threading of crawlers. All class instances share processed and queued lists of links. Instead of keeping track of processed links and queued links within the object, a JDBC connection could be established to store links in a database. Currently limited to one website at a time; however, it could be expanded upon by adding an externalLinks stack and adding to it as appropriate. JCrawler is intended to be used to quickly generate XML sitemaps or parse websites for your desired information. It's lightweight. Is this a good/decent way to write the crawler, given the limitations above? http://pastebin.com/VtgC4qVE - Main.java http://pastebin.com/gF4sLHEW - JCrawler.java http://pastebin.com/VJ1grArt - HTMLUtils.java Thanks for your feedback in advance! :)

    Read the article

  • database schema eligible for delta synchronization

    - by WilliamLou
    It's a question for discussion only. Right now, I need to re-design a mysql database table. Basically, this table contains all the contract records I synchronized from another database. The contract records can be modified or deleted, and users can add new contract records via a GUI interface. At this stage, the table structure is exactly the same as the contract info (columns: serial number, expiry date, etc.). In that case, I can only synchronize the whole table (delete all old records, replace with new ones). If I want to delta-synchronize the table (only synchronize modified, new and deleted records), how should I change the database schema? Here is the method I came up with, but I need your suggestions because I think it's a common scenario in database applications. 1) Introduce a sequence number concept/column: for each sequence, mark the newly added records, modified records and deleted records with this sequence number. By recording the last synchronized sequence number, only pass those records with a higher sequence number. 2) Because deleted contracts can be added back, and the original table has primary key constraints, should I create another table for those deleted records, or add a flag column to indicate whether a contract has been deleted? I hope I explained my question clearly. Anyway, if you know any articles or have your own suggestions about this, please let me know. Thanks!

    Read the article

  • Dependency Property ListBox

    - by developer
    Hi All, I want to use a dependency property so that my label displays values selected in the listbox. This is just to more clearly understand the working of a dependency property. <Window x:Class="WpfApplication1.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:WPFToolkit="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit" xmlns:local="clr-namespace:WpfApplication1" x:Name="MyWindow" Height="200" Width="300" > <StackPanel> <ListBox x:Name="lbColor" Width="248" Height="56" ItemsSource="{Binding TestColor}"/> <StackPanel> <Label Content="{Binding Path=Test, ElementName=lbColor}" /> </StackPanel> </StackPanel> </Window> Code behind: namespace WpfApplication1 { /// <summary> /// Interaction logic for Window1.xaml /// </summary> public partial class Window1 : Window { public ObservableCollection<string> TestColor { get; set; } public String Test { get { return (String)GetValue(TestProperty); } set { SetValue(TestProperty, value); } } // Using a DependencyProperty as the backing store for Title. This enables animation, styling, binding, etc... public static readonly DependencyProperty TestProperty = DependencyProperty.Register("Test", typeof(String), typeof(ListBox), new UIPropertyMetadata("Test1")); public Window1() { InitializeComponent(); TestColor = new ObservableCollection<string>(); DataContext = this; TestColor.Add("Red"); TestColor.Add("Orange"); TestColor.Add("Yellow"); TestColor.Add("Green"); TestColor.Add("Blue"); } } } Can anyone explain how I would accomplish this using a dependency property? Somehow I am very confused by the Dependency Property concept, and I just wanted to see a working example of it.

    Read the article
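
    A hedged rework of the snippet above rather than a definitive answer: the two main changes are that the owner type passed to Register is the declaring class (Window1, not ListBox), and that the Label is bound to the window's Test property, which is pushed from the ListBox selection. The label is given a name (lblTest) that the original XAML does not have:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        public partial class Window1 : Window
        {
            public static readonly DependencyProperty TestProperty =
                DependencyProperty.Register("Test", typeof(string), typeof(Window1),
                    new UIPropertyMetadata("Test1"));

            public string Test
            {
                get { return (string)GetValue(TestProperty); }
                set { SetValue(TestProperty, value); }
            }

            public Window1()
            {
                InitializeComponent();

                // Push the current ListBox selection into the dependency property...
                lbColor.SelectionChanged += (s, e) => Test = lbColor.SelectedItem as string;

                // ...and let the Label follow it via a binding whose source is the window.
                lblTest.SetBinding(ContentControl.ContentProperty,
                    new Binding("Test") { Source = this });
            }
        }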

  • Best ways to copyright protect your work for example Myows

    - by Saif Bechan
    Recently I have read about Myows; they say it's: "The universal copyright management and protection app for smart creatives". It is used to protect the copyright on your application, and more. Do you think this would be a good idea for a large application, or are there better ways to achieve such a thing? url: Myows Update Referencing: http://stackoverflow.com/questions/2618015/has-anyone-tried-myows-to-copyright-protect-your-work/2618628#2618628 Wow, there is no better person to have answered this question than the creator himself. Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copyable. I was glad to see that there is a company out there that provides such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so 'new'. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code to see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • Using Silverlight for Views in ASP.Net MVC - a bad idea?

    - by bplus
    I'm currently writing a small application for use internally at my office. I started out teaching myself some MVC (I've been a C# dev for 3 years). One of the main requirements is editable grids - I quickly realised that Silverlight (I have zero Silverlight experience) could be a big help with this. I've managed to create a proof of concept of getting MVC and Silverlight to talk back and forth by combining these two techniques: "Creating a Rest API using MVC" and "MVC SilverLight". I also got some help on stackoverflow: silverlight-grids-mvc-http-post Essentially all I'm doing is embedding a Silverlight object in a view, serializing the Model data as JSON and passing it to Silverlight (using init params written into the response). The Silverlight object can post data back to the controller as JSON. So far this seems like it could work quite well. However, I am a bit concerned that I could be painting myself into a corner with this approach: I don't have much experience with either technology, so I'm worried I'm going to get hit with something further down the line that I won't be able to work around. Has anybody else tried doing this? Any advice would be much appreciated!

    Read the article
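
    A rough sketch of what the MVC half of this arrangement can look like: one action writes the model as JSON into the Silverlight initParams, and another accepts the JSON the control posts back. The controller, action and DTO names are invented for illustration:

        using System.Web.Mvc;
        using System.Web.Script.Serialization;

        public class GridController : Controller
        {
            public ActionResult Edit(int id)
            {
                RowDto[] model = LoadRows(id); // hypothetical data access
                string json = new JavaScriptSerializer().Serialize(model);

                // The view writes this value into the initParams of the Silverlight object tag.
                ViewData["InitParams"] = "rows=" + Server.UrlEncode(json);
                return View();
            }

            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Save(string rows)
            {
                RowDto[] edited = new JavaScriptSerializer().Deserialize<RowDto[]>(rows);
                // ... persist the edited rows ...
                return Json(new { ok = true });
            }

            private RowDto[] LoadRows(int id) { return new RowDto[0]; }
        }

        public class RowDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }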

  • Combining aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats(). I want to create a domain specific query language (DSQL) above my DB, to facilitate using a domain language to query the DB. The 'language' comprises boolean expressions (or, more specifically, SQL-like criteria) which I then 'translate' back into pure (ANSI) SQL and send to the underlying DB. The following lines are examples of what the language statements will look like, and hopefully it will help further clarify the concept: **Example 1** DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42 Translation: fetch all rows in an underlying table where the computed value for the aggregate function foobar() is between 1 and 3 AND the computed value for the AGG FUNC fredstats() is greater than 42 **Example 2** DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42 Translation: Fetch all rows where the specified criteria match **Example 3** DQL statement: foobar('green') / foobar('red') <> 42 Translation: Fetch all rows where the specified criteria match **Example 4** DQL statement: foobar('green') - foobar('red') >= 42 Translation: Fetch all rows where the specified criteria match Given the following information: The table upon which the queries above are being executed is called 'tbl'. Table 'tbl' has the following structure (id int, name varchar(32), weight float). The result set returns only tbl.id, tbl.name and the names of the aggregate functions as columns in the result set - so for example the foobar() AGG FUNC column will be called foobar in the result set. So, for example, the first DQL query will return a result set with the following columns: id, name, foobar, fredstats Given the above, my questions then are: What would be the underlying SQL required for Example 1? What would be the underlying SQL required for Example 3? Given an algebraic equation comprising AGGREGATE functions, is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)? I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.

    Read the article

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code. 1. Set all variables before running Recordset->OpenForward: Connection->Execute("SET @GetStartDate = ..."); Connection->Execute("SET @GetEndDate = ..."); // Repeat for all parameters Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement? 2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL: // Command has previously been created ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate"); ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate"); // Set values and attach etc... What I would like to know is whether there is something like: Connection->SetParameter("GetStartDate", "20090101"); Connection->SetParameter("GetEndDate", "20100101"); And these will persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.

    Read the article
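
    One possibility that behaves like the hoped-for Connection->SetParameter, sketched in C#/ADO.NET rather than classic ADO and assuming SQL Server 2005 or later for the CONTEXT_INFO() function: CONTEXT_INFO is session-scoped, so a value set once on an open connection is visible to every later command on that same connection, unlike a local @variable set in a separate batch.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class ContextInfoSketch
        {
            static void Main()
            {
                // The connection string is illustrative only.
                using (SqlConnection conn = new SqlConnection(
                    "Server=.;Database=Reports;Integrated Security=true"))
                {
                    conn.Open();

                    // Stash the start date (as yyyyMMdd text) in the session's CONTEXT_INFO.
                    using (SqlCommand set = new SqlCommand(
                        "DECLARE @ctx varbinary(128); " +
                        "SET @ctx = CAST(@startDate AS varbinary(128)); " +
                        "SET CONTEXT_INFO @ctx", conn))
                    {
                        set.Parameters.Add("@startDate", SqlDbType.Char, 8).Value = "20090101";
                        set.ExecuteNonQuery();
                    }

                    // Any later command on the same open connection can read it back.
                    using (SqlCommand get = new SqlCommand(
                        "SELECT CAST(CONTEXT_INFO() AS char(8))", conn))
                    {
                        Console.WriteLine(get.ExecuteScalar()); // 20090101
                    }
                }
            }
        }

    The report queries can then read CONTEXT_INFO() where they previously called the VBScript globals; whether that beats simply parameterizing the queries is worth weighing.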

  • Value objects in DDD - Why immutable?

    - by Hobbes
    I don't get why value objects in DDD should be immutable, nor do I see how this is easily done. (I'm focusing on C# and Entity Framework, if that matters.) For example, let's consider the classic Address value object. If you needed to change "123 Main St" to "123 Main Street", why should I need to construct a whole new object instead of saying myCustomer.Address.AddressLine1 = "123 Main Street"? (Even if Entity Framework supported structs, this would still be a problem, wouldn't it?) I understand (I think) the idea that value objects don't have an identity and are part of a domain object, but can someone explain why immutability is a Good Thing? EDIT: My final question here really should be "Can someone explain why immutability is a Good Thing as applied to Value Objects?" Sorry for the confusion! EDIT: To clarify, I am not asking about CLR value types (vs reference types). I'm asking about the higher level DDD concept of Value Objects. For example, here is a hack-ish way to implement immutable value types for Entity Framework: http://rogeralsing.com/2009/05/21/entity-framework-4-immutable-value-objects. Basically, he just makes all setters private. Why go through the trouble of doing this?

    Read the article
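
    A minimal sketch of an immutable Address in the DDD sense (an ordinary class, not a CLR struct): because a value can never change after construction, it can be shared, compared and hashed by value without any other holder of the reference seeing it mutate underneath them. "Changing" the street line means swapping in a new value:

        public sealed class Address
        {
            public string Line1 { get; private set; }
            public string City { get; private set; }

            public Address(string line1, string city)
            {
                Line1 = line1;
                City = city;
            }

            // Instead of myCustomer.Address.Line1 = "...", the caller swaps the whole value:
            // myCustomer.Address = myCustomer.Address.WithLine1("123 Main Street");
            public Address WithLine1(string line1)
            {
                return new Address(line1, City);
            }

            public override bool Equals(object obj)
            {
                Address other = obj as Address;
                return other != null && other.Line1 == Line1 && other.City == City;
            }

            public override int GetHashCode()
            {
                return (Line1 ?? "").GetHashCode() ^ (City ?? "").GetHashCode();
            }
        }

    Equality and hashing by value are only safe because nothing can change Line1 or City after the object is handed out; that, in a sentence, is what the immutability buys.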

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However in Windows Server 2003 and earlier (including XP) Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end. Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.

    Read the article

  • How would you go about parsing markdown?

    - by John Leidegren
    You can find the syntax here. The thing is, the source that comes with the download is written in Perl, which I have no intention of honoring. It is riddled with regex and it relies on MD5 hashes to escape certain characters. Something is just wrong about that! I'm about to hand-code a parser for markdown and I'm wondering if someone has some experience with this. Edit: If you don't have anything meaningful to say about the actual parsing of markdown, spare me the time. (This might sound harsh, but yes, I'm looking for insight, not a solution, i.e. a third-party library.) To help a bit with the answers: regex are meant to identify patterns, NOT to parse an entire grammar. That people consider doing so is foobar. If you think about markdown, it's fundamentally based around the concept of paragraphs. As such, a reasonable approach might be to split the input into paragraphs. There are many kinds of paragraphs, e.g. heading, text, list, blockquote, code. The challenge is thus to identify these paragraphs and in what context they occur. I'll be back with a solution, once I find it worthy of being shared.

    Read the article
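
    A rough sketch of the paragraph-first approach described above (in C#): split the input on blank lines, then classify each block by its first characters. This is only the block-level half of a markdown parser; inline spans (emphasis, links, code) would be a second pass over each block:

        using System;
        using System.Collections.Generic;

        static class BlockSplitter
        {
            // Returns (kind, raw text) pairs, one per block-level "paragraph".
            public static IEnumerable<KeyValuePair<string, string>> Classify(string input)
            {
                string[] blocks = input.Replace("\r\n", "\n")
                                       .Split(new[] { "\n\n" }, StringSplitOptions.RemoveEmptyEntries);
                foreach (string block in blocks)
                {
                    string b = block.Trim('\n');
                    string kind;
                    if (b.StartsWith("#")) kind = "heading";
                    else if (b.StartsWith(">")) kind = "blockquote";
                    else if (b.StartsWith("* ") || b.StartsWith("- ")) kind = "list";
                    else if (b.StartsWith("    ")) kind = "code";
                    else kind = "text";
                    yield return new KeyValuePair<string, string>(kind, b);
                }
            }
        }

    Real markdown has plenty of cases this ignores (setext headings, lazy blockquote continuation, indented list content), but the block-then-inline split keeps each rule small and testable.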

  • How can I create a new Person object correctly in Javascript?

    - by TimDog
    I'm still struggling with this concept. I have two different Person objects, very simply: ;Person1 = (function() { function P (fname, lname) { P.FirstName = fname; P.LastName = lname; return P; } P.FirstName = ''; P.LastName = ''; var prName = 'private'; P.showPrivate = function() { alert(prName); }; return P; })(); ;Person2 = (function() { var prName = 'private'; this.FirstName = ''; this.LastName = ''; this.showPrivate = function() { alert(prName); }; return function(fname, lname) { this.FirstName = fname; this.LastName = lname; } })(); And let's say I invoke them like this: var s = new Array(); //Person1 s.push(new Person1("sal", "smith")); s.push(new Person1("bill", "wonk")); alert(s[0].FirstName); alert(s[1].FirstName); s[1].showPrivate(); //Person2 s.push(new Person2("sal", "smith")); s.push(new Person2("bill", "wonk")); alert(s[2].FirstName); alert(s[3].FirstName); s[3].showPrivate(); The Person1 set alerts "bill" twice, then alerts "private" once -- so it recognizes the showPrivate function, but the local FirstName variable gets overwritten. The second Person2 set alerts "sal", then "bill", but it fails when the showPrivate function is called. The new keyword here works as I'd expect, but showPrivate (which I thought was a publicly exposed function within the closure) is apparently not public. I want to get my object to have distinct copies of all local variables and also expose public methods -- I've been studying closures quite a bit, but I'm still confused on this one. Thanks for your help.

    Read the article

  • How to distinguish composition and self-typing use-cases

    - by ayvango
    Scala has two instruments for expressing object composition: the original self-type concept and well-known trivial composition. I'm curious which situations I should use each in. There are obvious differences in their applicability. Self-type requires you to use traits. Object composition allows you to change extensions at run-time with a var declaration. Leaving technical details aside, I can see two indicators to help with classifying use cases. If some object is used as a combinator for a complex structure such as a tree, or just has several similarly typed parts (a 1-car-to-4-wheels relation), then it should use composition. There is an extreme opposite use case. Let's assume one trait becomes too big to clearly follow, and it gets split. It is quite natural that you should use self-types for this case. Those rules are not absolute. You may do extra work to convert code between these techniques, e.g. you may replace the 4-wheels composition with self-typing over Product4. You may use Cake[T <: MyType] {part : MyType} instead of Cake { this : MyType => } for cake pattern dependencies. But both cases seem counterintuitive and give you extra work. There are plenty of boundary use cases, though. One-to-one relations are very hard to decide on. Is there any simple rule to decide what kind of technique is preferable? Self-type makes your classes abstract, composition makes your code verbose. Self-type gives you problems with blending namespaces and also gives you extra typing for free (you get not just a cocktail of two elements but a gasoline-motor oil cocktail known as a petrol bomb). How can I choose between them? What hints are there? Update: Let us discuss the following example: the Adapter pattern. What benefits does it have with the self-typing and composition approaches?

    Read the article

  • Diamond problem in C++

    - by Jack
    I know about the diamond problem. I am using the gcc compiler. I have some scenarios I need an explanation for. 1) class A{ public: virtual void eat(){cout<<"A eat\n";} }; class B:public A{ public: void eat(){ cout<<"B eat\n";}}; class C:public A{ public: void eat(){ cout<<"C eat\n";}}; class D:public B,C{ public: void eat(){ cout<<"D eat\n";}}; int main() { A * a = new D(); a->eat(); getch(); return 0; } Why doesn't this work? 2) class A{ public: void eat(){cout<<"A eat\n";} }; class B:virtual public A{ public: void eat(){ cout<<"B eat\n";}}; class C:virtual public A{ public: void eat(){ cout<<"C eat\n";}}; class D: public B,C{ public: void eat(){ cout<<"D eat\n";}}; int main() { A * a = new D(); a->eat(); getch(); return 0; } When I do this, what happens in the background? How does the ambiguity get removed? Is the concept of vtables involved here?

    Read the article

  • Partial Classes - are they bad design?

    - by dferraro
    Hello, I'm wondering why the 'partial class' concept even exists in .NET. I'm working on an application and we are reading an (actually very good) book relevant to the development platform we are implementing at work. In the book he provides a large code base/wrapper around the platform API and explains how he developed it as he teaches different topics about the platform development. Anyway, long story short - he uses partial classes, all over the place, as a way to fake multiple inheritance in C# (IMO). Why he didn't just split the classes up into multiple ones and use composition is beyond me. He will have 3 'partial class' files to make up his base class, each with 3-500 lines of code... And he does this several times in his API. Do you find this justifiable? If it were me, I'd have followed the S.R.P. and created multiple classes to handle different required behaviors, then created a base class that has instances of these classes as members (e.g. composition). Why did MS even put partial class into the framework? They removed the ability to expand/collapse all code at each scope level in C# (this was allowed in C++) because it was obviously just allowing bad habits - partial class is IMO the same thing. I guess my question is: Can you explain to me when there would be a legitimate reason to ever use a partial class? I do not mean this to be a rant / war thread. I'm honestly looking to learn something here. Thanks

    Read the article
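
    For what it's worth, the case partial classes were mainly added for is tool-generated code: the generated half of a class lives in a file that can be regenerated at any time without clobbering the hand-written half. A schematic example (the file names are notional):

        // CustomerForm.Designer.cs -- regenerated by the designer, never edited by hand
        public partial class CustomerForm
        {
            private void InitializeComponent()
            {
                // generated layout and wiring code lives here
            }
        }

        // CustomerForm.cs -- hand-written behaviour for the same class
        public partial class CustomerForm
        {
            public CustomerForm()
            {
                InitializeComponent();
            }

            public void Save()
            {
                // application logic written by the developer
            }
        }

    Used to split a purely hand-written class into several large files, as the book apparently does, it is closer to the composition-avoidance the question complains about than to the scenario the feature was designed for.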

  • What's the best practice to setup testing for ASP.Net MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.Net MVC app, and from what I've been reading here and there thus far, the definition of legacy code kind of piques my interest, where it mentions that legacy code is any code without unit tests. So I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to properly do TDD and unit testing at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time too that I came upon the concept of integration testing, so from my understanding it seems that unit testing should be done to define the test and basic assumptions of each function, and if the function is dependent on something else, that something else needs to be mocked so that the tests are always singular and can run fast. Questions: So am I right to say that the project should have started with unit tests with proper mocks, using something like Rhino Mocks, and then anything else which requires a 3rd party dll, database data access etc. is done via integration testing using Selenium? Because I have a function which calls a third party dll, I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part) to test it, or just cover that part in my Selenium integration testing when I submit my forms and call the dll. And for user acceptance tests, is it safe to say we can just use Selenium again? Am I missing something, or is there a better way/framework? I'm trying to put in more tests for regression testing, and to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define the function, sort of like meta documentation. Thanks! I hope this question isn't too subjective, because I need it for my case.

    Read the article
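
    A small sketch of the split described above: the third-party dll is hidden behind an interface, the unit test substitutes a hand-rolled fake (Rhino Mocks would do the same job), and the thin adapter that actually news up the vendor object is left to the Selenium/integration run. All of the types here are hypothetical:

        using NUnit.Framework;

        public interface IPostalCodeLookup
        {
            string CityFor(string postalCode);
        }

        public class GreetingService
        {
            private readonly IPostalCodeLookup lookup;

            public GreetingService(IPostalCodeLookup lookup)
            {
                this.lookup = lookup;
            }

            public string Greet(string postalCode)
            {
                return "Hello, " + lookup.CityFor(postalCode);
            }
        }

        [TestFixture]
        public class GreetingServiceTests
        {
            private class FakeLookup : IPostalCodeLookup
            {
                public string CityFor(string postalCode) { return "Springfield"; }
            }

            [Test]
            public void Greet_uses_the_lookup_result()
            {
                GreetingService service = new GreetingService(new FakeLookup());
                Assert.AreEqual("Hello, Springfield", service.Greet("12345"));
            }
        }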

  • Unboxing to unknown type

    - by Robert
    I'm trying to figure out syntax that supports unboxing an integral type (short/int/long) to its intrinsic type, when the type itself is unknown. Here is a completely contrived example that demonstrates the concept: // Just a simple container that returns values as objects struct DataStruct { public short ShortValue; public int IntValue; public long LongValue; public object GetBoxedShortValue() { return ShortValue; } public object GetBoxedIntValue() { return IntValue; } public object GetBoxedLongValue() { return LongValue; } } static void Main( string[] args ) { DataStruct data; // Initialize data - any value will do data.LongValue = data.IntValue = data.ShortValue = 42; DataStruct newData; // This works if you know the type you are expecting! newData.ShortValue = (short)data.GetBoxedShortValue(); newData.IntValue = (int)data.GetBoxedIntValue(); newData.LongValue = (long)data.GetBoxedLongValue(); // But what about when you don't know? newData.ShortValue = data.GetBoxedShortValue(); // error newData.IntValue = data.GetBoxedIntValue(); // error newData.LongValue = data.GetBoxedLongValue(); // error } In each case, the integral types are consistent, so there should be some form of syntax that says "the object contains a simple type of X, return that as X (even though I don't know what X is)". Because the objects ultimately come from the same source, there really can't be a mismatch (short != long). I apologize for the contrived example, it seemed like the best way to demonstrate the syntax. Thanks.

    Read the article
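
    A short illustration of why the later lines fail: unboxing has to name the exact boxed type, so there is no "cast to whatever it happens to be". When the target width is known but the boxed type is not, System.Convert (which goes through IConvertible rather than unboxing) is one way out:

        using System;

        class UnboxDemo
        {
            static void Main()
            {
                object boxedShort = (short)42;
                object boxedLong = 42L;

                // Convert widens any boxed integral to long without knowing which one it was.
                long a = Convert.ToInt64(boxedShort); // 42
                long b = Convert.ToInt64(boxedLong);  // 42

                // long c = (long)boxedShort;         // InvalidCastException: unboxing needs the exact type

                Console.WriteLine(a + " " + b);
            }
        }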

  • 'Forward-Compatible' Program Design

    - by Jeffrey Kern
    The majority of the questions I've asked here so far on StackOverflow have been about how to implement individual concepts and techniques towards developing a software-based NES clone via the XNA environment. The small samples that I've thrown together on my PC work relatively well. Except I've hit a brick wall: how do I merge all of these samples together? Having a proof of concept is great, except when you need it to go beyond just that. I now have samples strewn about that I'm trying to merge, some of them incomplete. And now I'm stuck with the chicken-and-egg situation where I would like to combine these samples, to make sure they work, but I cannot without test data. And I don't have tools to create test data, because they'd need to be based on the individual pieces that need to be put together. In my mind, I'm having nightmares about circular references. For my sample data, I am hoping to save it in XML and write a specification - and then make sample data by hand - but I'm too paranoid of manually creating an XML file full of incorrect data and blaming it on my code, or vice-versa. It doesn't help that the end result of my work is graphic-oriented, which makes it interesting how a graphic on the screen can be visualized in XML nodes. I guess my question is this: What design patterns and disciplines exist in the coding world that address this type of concern? I've always relied on brute-force coding and restarting a project with a whole new code base in attempts to further my goals, but I doubt that would be the best way to do so. Within my college career, the majority of my programming was work on simple projects that came out of a book, or with a given correct data set and a verifiable result. I don't have that, as my own design documents that I am going by could be terribly wrong.

    Read the article
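
    One way around the hand-written-XML worry above: define the data classes first, build a small sample object graph in code, and let XmlSerializer write the file, so the test data is well-formed by construction and matches whatever the loader deserializes. The types below are invented purely for illustration:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        public class SpriteDefinition
        {
            public string Name;
            public int TileIndex;
            public int PaletteIndex;
        }

        class SampleDataWriter
        {
            static void Main()
            {
                SpriteDefinition sample = new SpriteDefinition { Name = "player", TileIndex = 3, PaletteIndex = 1 };

                XmlSerializer serializer = new XmlSerializer(typeof(SpriteDefinition));
                using (StreamWriter writer = new StreamWriter("sprite-sample.xml"))
                {
                    serializer.Serialize(writer, sample);   // produces well-formed sample data
                }

                using (StreamReader reader = new StreamReader("sprite-sample.xml"))
                {
                    SpriteDefinition roundTripped = (SpriteDefinition)serializer.Deserialize(reader);
                    Console.WriteLine(roundTripped.Name);   // "player"
                }
            }
        }

    The round trip is the point: the same serializer that writes the sample is the one the game code uses to read it, so a mismatch points at the schema or the classes rather than at a typo in a hand-edited file.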

  • Is ADO.NET Entity framework database schema update possible ?

    - by fyasar
    Hi all, I'm working on a proof-of-concept application (something like a CRM) and I need some advice. My application's data layer is completely dynamic and runs on EF 3.5. When the user updates an entity, changes a relation or adds a new column to the database, I'm first planning to handle these changes with custom classes. Then I rebuild my model layer with the new changes during the application's runtime. My model layer is tightly coupled to my project so that model layer changes are easy to reflect (it is connected to my project via interfaces and loaded into the application domain at runtime). I need to create dynamic entities, create entity relations and modify them during the runtime, and after that I need to create a database change script for updating the database schema. I know the ADO.NET team says "we will be able to provide this property in EF 4.0", but I don't need to wait for them. How can I apply database changes during the runtime via EF 3.5? For example, I need to create a new entity or change some entity's schema, add new properties or change property types; after that, how can I apply these changes to the physical database schema? Any ideas?

    Read the article

  • conceptual question : what do performSelectorOnMainThread do?

    - by hib
    Hello all, I just came across this strange situation. I was using the technique of lazy image loading from Apple's examples. When I used the class in my application it gave me a topic to learn, but I didn't understand what was actually happening there. So here goes the scenario: I think everybody has seen Apple's lazy table images loading sample. I was reloading my table on finishing the parsing of data: - (void)didFinishParsing:(NSMutableArray *)appList { self.upcomingArray = [NSMutableArray arrayWithArray:loadedApps]; // we are finished with the queue and our ParseOperation [self.upcomingTableView reloadData]; self.queue = nil; // we are finished with the queue and our ParseOperation } but as a result the connection was not starting and images were not loading. When I completely copied the lazy image loading approach and replaced the above code with the following code, it worked fine: - (void)didFinishParsing:(NSMutableArray *)appList { [self performSelectorOnMainThread:@selector(handleLoadedApps:) withObject:appList waitUntilDone:NO]; self.queue = nil; // we are finished with the queue and our ParseOperation } So I want to know what the concept behind this is. Please let me know if you cannot understand the question or the details are not enough, because I desperately want to know why it is like this.

    Read the article

  • Why am I getting "(304) Not Modified" error on some links when using HttpWebRequest?

    - by Greg
    Hi, any ideas why, on some links that I try to access using HttpWebRequest, I am getting "The remote server returned an error: (304) Not Modified." in the code? The code I'm using is from Jeff's post here. Note the concept of the code is a simple proxy server, so I'm pointing my browser at this locally running piece of code, which gets my browser's request, and then proxies it on by creating a new HttpWebRequest, as you'll see in the code. It works great for most sites/links, but for some this error comes up. You will see one key bit in the code is where it seems to copy the http header settings from the browser request to its request out to the site, and it copies in the header attributes. Not sure if the issue is something to do with how it mimics this aspect of the request and then what happens as the result comes back? case "If-Modified-Since": request.IfModifiedSince = DateTime.Parse(listenerContext.Request.Headers[key]); break; I get the issue for example from http://en.wikipedia.org/wiki/Main_Page Thanks

    Read the article
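
    A hedged sketch of one way a proxy loop can cope with this: because the browser's If-Modified-Since header is copied onto the outgoing HttpWebRequest, the origin server may legitimately answer 304 Not Modified, and GetResponse() surfaces any non-success status, including 304, as a WebException. Catching it and relaying the 304 to the browser (rather than treating it as a failure) is one option:

        using System.Net;

        static class ProxyHelpers
        {
            // Returns the response even when the server answers 304 Not Modified.
            public static HttpWebResponse GetResponseAllowing304(HttpWebRequest request)
            {
                try
                {
                    return (HttpWebResponse)request.GetResponse();
                }
                catch (WebException ex)
                {
                    HttpWebResponse response = ex.Response as HttpWebResponse;
                    if (response != null && response.StatusCode == HttpStatusCode.NotModified)
                    {
                        return response; // pass the 304 (and its headers) straight back to the browser
                    }
                    throw;
                }
            }
        }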

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to create a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - As far as I can tell, they are all referring to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating "Ivory tower" architects and developers - The work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting some feedback on since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA Update I've heard another strategy goes along with overall SOA implementation: Get the business involved, and seek executive sponsorship. This would be part of the "Top-down" effort.

    Read the article

  • Open the Word Application from a button on a web page

    - by Andrea
    I'm developing a proof of concept web application: A web page with a button that opens the Word Application installed on the user's PC. I'm stuck with a C# project in Visual Studio 2008 Express (Windows XP client, LAMP server). I've followed the Writing an ActiveX Control in .NET tutorial and after some tuning it worked fine. Then I added my button for opening Word. The problem is that I can reference Microsoft.Office.Interop.Word from the project, but I'm not able to access it from the web page. The error says "That assembly does not allow partially trusted callers". I've read a lot about security in .NET, but I'm totally lost now. Disclaimer: I've only been into .NET for 4 days. I've tried to work around this issue but I cannot see the light!! I don't even know if it will ever be possible! using System; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Data; using System.Linq; using System.Text; using System.Windows.Forms; using Word = Microsoft.Office.Interop.Word; using System.IO; using System.Security.Permissions; using System.Security; [assembly: AllowPartiallyTrustedCallers] namespace OfficeAutomation { public partial class UserControl1 : UserControl { public UserControl1() { InitializeComponent(); } private void openWord_Click(object sender, EventArgs e) { try { Word.Application Word_App = null; Word_App = new Word.Application(); Word_App.Visible = true; } catch (Exception exc) { MessageBox.Show("Can't open Word application (" + exc.ToString() + ")"); } } } }

    Read the article

  • Prevent Rails link_to_remote multiple submits w Javascript

    - by Chris
    In a Rails project I need to keep a link_to_remote from getting double-clicked. It looks like :before and :after are my only choices - they get prepended/appended to the onclick Ajax call, respectively. But if I try something like :before => "self.stopObserving()" the Ajax is never run. If I try it for :after, the Ajax is run but the link never stops observing. The solutions I've seen rely on creating a variable and blocking the whole form, but there are multiple link_to_remote rows on this page and it is valid to click more than one of them at a time - just not the same one twice. One variable per row declared outside of link_to_remote seems very kludgey... Instead of using Prototype I originally tried plain Javascript first for this proof of concept - but it fails too: <a href="#" onclick="self.onclick = function(){alert('foo');};">click</a> just puts up an alert when clicked - the lambda here does nothing? This next one is more like the desired goal and should only alert the first time. But instead it alerts every time: <a href="#" onclick="alert('bar'); self.onclick = function(){return false;};">click</a> All ideas appreciated!

    Read the article
