Search Results

Search found 30301 results on 1213 pages for 'content db'.


  • Flex: How to access movieclips within an imported swf

    - by squared
    Hello, I have imported a swf (not created with Flex, i.e. non-framework) into a Flex application. Once loaded, I would like to access movieclips within that imported swf. Looking at Adobe's docs (http://livedocs.adobe.com/flex/3/html/help.html?content=controls_15.html), it seems straightforward; however, their examples are between a Flex app and an imported swf that was itself created with Flex. Like their example, I'm trying to use the SystemManager to access the imported swf's content; however, I receive the following error:

        TypeError: Error #1034: Type Coercion failed: cannot convert flash.display::MovieClip@58ca241 to mx.managers.SystemManager.

    Is this error occurring because I'm importing a non-framework swf into a framework swf? Thanks in advance for any assistance. Code:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:SWFLoader source="assets/test.swf" id="loader" creationComplete="swfLoaded()" />
            <mx:Script>
                <![CDATA[
                    import mx.managers.SystemManager;

                    [Bindable]
                    public var loadedSM:SystemManager;

                    private function swfLoaded():void {
                        loadedSM = SystemManager(loader.content);
                    }
                ]]>
            </mx:Script>
        </mx:Application>

    Read the article

  • problems with unpickling an 80 megabyte file in python

    - by tipu
    I am using the pickle module to read and write large amounts of data to a file. After writing an 80 megabyte pickled file, I load it in a SocketServer using:

        class MyTCPHandler(SocketServer.BaseRequestHandler):
            def handle(self):
                print("in handle")
                words_file_handler = open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'rb')
                words = pickle.load(words_file_handler)
                tweets = shelve.open('/home/tipu/Dropbox/dev/workspace/search/tweets.db', 'r')
                results_per_page = 25
                query_details = self.request.recv(1024).strip()
                query_details = eval(query_details)
                query = query_details["query"]
                page = int(query_details["page"]) - 1
                return_ = []
                booleanquery = BooleanQuery(MyTCPHandler.words)
                if query.find("(") > -1:
                    result = booleanquery.processAdvancedQuery(query)
                else:
                    result = booleanquery.processQuery(query)
                result = list(result)
                i = 0
                for tweet_id in result and i < 25:
                    #return_.append(MyTCPHandler.tweets[str(tweet_id)])
                    return_.append(tweet_id)
                    i += 1
                self.request.send(str(return_))

    However, the file never seems to load after the pickle.load line, and it eventually halts the connection attempt. Is there anything I can do to speed this up?

    Read the article

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (the successor to FrontPage) to modify content, and I'd like to allow them to keep doing that. The issues are:

    - Since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines? Can I specify a "default" action for a controller so that, given a URL like /products/new_view_here, I can let them save pages (views) and see them in the browser without having to go through the check-in/build/deploy process?
    - I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.

    The ideas I've come up with so far are:

    - Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.
    - Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?

    Surely someone has tackled this problem before?
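    One way to cover the "default action" part of this is a catch-all route that funnels any unmatched URL to a single action, which then renders a view chosen by path, so a newly saved view shows up without a rebuild. A rough sketch under that assumption; the controller name, route name and Views/Pages folder are inventions for illustration, not anything from the original question:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class ContentController : Controller
        {
            // Handles e.g. /products/new_view_here by rendering the matching view file directly.
            public ActionResult Display(string path)
            {
                var viewName = "~/Views/Pages/" + (string.IsNullOrEmpty(path) ? "index" : path) + ".aspx";
                return View(viewName);
            }
        }

        public static class ContentRoutes
        {
            // Register this after the normal routes so real controllers keep working.
            public static void Register(RouteCollection routes)
            {
                routes.MapRoute(
                    "ContentPages",
                    "{*path}",
                    new { controller = "Content", action = "Display" });
            }
        }

    Whether SharePoint Designer can be pointed at the Views folder is a separate problem; the route only solves the "see it in the browser without a deploy" half.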

    Read the article

  • I am totally unable to add a fileTree (JQuery fileTree addon) to my asp.net page

    - by Gadgetsan
    Okay, so I have an ASP.NET (C#) application and I want to add a list of files and folders to the page, so I figured I should use jQuery fileTree (http://abeautifulsite.net/2008/03/jquery-file-tree/#download), but now I am totally unable to display the file list. I initialise the page this way in Site.Master:

        <link rel="stylesheet" type="text/css" href="../../Content/superfish.css" media="screen">
        <link href="../../Content/jqueryFileTree.css" rel="stylesheet" type="text/css" />
        <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>
        <script src="../../Scripts/jquery.easing.1.3.js" type="text/javascript"></script>
        <script src="../../Scripts/jqueryFileTree.js" type="text/javascript"></script>
        <script src="../../Scripts/JqueryUI/js/jquery-ui-1.8.1.custom.min.js" type="text/javascript"></script>
        <script type="text/javascript" src="../../Scripts/jquery.dataTables.js"></script>
        <script type="text/javascript" src="../../Scripts/superfish.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                test = $('#fileTree').fileTree({ script: "jqueryFileTree.aspx" }, function(file) {
                    openFile(file);
                });
                $("button").button();
                oTable = $('#data').dataTable({
                    "bJQueryUI": true,
                    "sPaginationType": "full_numbers",
                    "bSort": true
                });
            });
        </script>

    and in the page, I put my div this way:

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            Documents

    But I'm positive that jqueryFileTree.aspx is never called, because if I return this page from my controller it shows the list of files/folders correctly, so it's also not a problem with my aspx connector. Also, I checked the JS console: it gives no error, and there is nothing more in the page source code. I've been trying to solve this all day without success, so your help is appreciated.

    Read the article

  • Discussion - Allowing / blocking user access to pages (Client Side Only!) - Javascript / Jquery

    - by Ozaki
    TLDR: Using plain HTML / Javascript (client side only) I want to prevent viewing of certain pages. The user will have to type a username and password, and depending on that they get access to different pages. Answers can NOT include anything server side whatsoever. It does not matter if they can break it easily; there is no sensitive information, etc. Also, the target audience will not have access to the internet OR probably know what a cookie is. At some point the user will have to type a username / password (I can define the cookie here). Currently I thought of using cookies to set a cookie for each page to say "true" / "false", but that would get messy with so many cookies. Or setting an array within a cookie for each page? I have a div "#Content" which, as it looks, encompasses all of my content on the page, so blocking out content will be as simple as replacing it with ("sorry you don't have access") etc. For example:

        $.cookie("Access", "page1, page2, page3", { expires: 1 });

    I am looking for any way to do this; it does not have to be with cookies. It would be nice to get a discussion of the different ways this can be done. So the question is: what do YOU think would be a good way to go about doing this with client-side validation?

    Read the article

  • flex actionscript Datagridcolumn array

    - by Jad
    Hi, we have an AIR app using a DataGrid. We want to store the dataGrid.columns array in a MySQL DB through PHP. This is needed because the user can customise the column headers of the datagrid, and his preference needs to be stored and shown to him on his next login. Using HTTPService, we tried sending the dataGrid.columns array as a string, as follows:

        var ht:HTTPService = new HTTPService();
        ht.url = Config.getServerURL();
        ht.method = URLRequestMethod.POST;
        ht.resultFormat = "text";
        ht.request["action"] = "updateGrid";
        ht.request["headercolumns"] = colsArray.toString();

    The data is stored as a comma-separated array string in the DB. When we retrieve it back, we cannot seem to cast it back to the DataGridColumns and assign it. Please let me know. Regards, Jada.

    Read the article

  • Rails: Generated tokens missing occasionally

    - by Vincent Chan
    We generate a unique token for each user and store it in the database. Everything is working fine in the local environment. However, after we deploy the code to the production server on Engine Yard, things become weird. We tried to register an account right after the deploy; it worked fine and we could see the token in the db. But after that, when we register new accounts, we cannot see any tokens; we only have NULL in the db. I'm not sure what caused this problem because we can't reproduce it on the local machine. Thanks for your help.

    Read the article

  • What should the standard be for ReSTful URLS?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on ReST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app, which seems like a good idea. However, there seems to be a lot of contradictory thought on what the best scheme is for ReSTful URLs. Some people advocate simple pretty URLs:

        http://api.myapp.com/resource/1

    In addition, some people like to add the API version to the URL, like so:

        http://api.myapp.com/v1/resource/1

    And to make things even more confusing, some people advocate adding the content-type to GET requests:

        http://api.myapp.com/v1/resource/1.xml
        http://api.myapp.com/v1/resource/1.json
        http://api.myapp.com/v1/resource/1.txt

    whereas others think the content-type should be sent in the HTTP header. Soooooooo.... that's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL that includes a version number, resource locator and content-type, but I'm new to this so I could be wrong. On the other hand, you could argue that you should do "whatever works best for you", but that doesn't really fit with the ReST mentality as far as I can tell, since the aim is to have a standard. And since a lot of you will have more experience than me with ReST, I thought I'd ask for some guidance. So, with all that in mind: what should the standard be for ReSTful URLs?
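    On the header-negotiation side of that last point, the representation is requested with an Accept header while the URL stays extension-free. A small C# sketch of what a client call in that style looks like; the endpoint is just the example URL from the question, nothing about it is prescriptive:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class RestClientSketch
        {
            static async Task Main()
            {
                using var client = new HttpClient();
                // Version and resource live in the path; the format is negotiated via Accept,
                // not a .json/.xml extension on the URL.
                var request = new HttpRequestMessage(HttpMethod.Get, "http://api.myapp.com/v1/resource/1");
                request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

                var response = await client.SendAsync(request);
                Console.WriteLine(response.Content.Headers.ContentType);  // whatever the server chose to return
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }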

    Read the article

  • Taking Responsibilities too early - danger or boon?

    - by narresh
    I am only a six-months-experienced guy in the software industry. Due to the recession, a large number of employees at our company have been asked to leave, and the impact falls on the new, inexperienced guys. The problem is:

    - We have to interact with the client directly to gather the requirements.
    - We have to design all the HLD and LLD, use cases and DB diagrams, and they should satisfy customers who hold to high software industry standards.
    - In this do-or-die situation (the client gives us narrow deadlines), we spend most of our time interacting with the client; only a little time is available for development and testing, and even sitting 24 x 7 won't help us finish our tasks.
    - If a task is not finished within the expected time frame, that triggers a poor performance rating on our profile.

    There are two important questions flashing in our minds: (1) quit the job and do other business, or (2) face the challenge. If the option is to face the challenge, what are the tools we should use to automate requirement gathering, testing, DB diagrams, code review and the rest? (We are working on ASP.NET 3.5 with SQL Server 2005.)

    Read the article

  • Dynamic data validation in ASP.NET MVC

    - by user252160
    I've recently read about the model validation capabilities of ASP.NET MVC, which are all very cool up to a certain point. What happens if the application doesn't know the data that it works with, because it is all stored in the DB and put together at runtime? Just like in Drupal, I'd like to be able to define custom types at runtime, and assign runtime validation rules as well. Obviously, the idea of assigning attributes to well-established models is now gone. What else could be done? I am thinking in terms of rules being stored as JSON objects in DB fields, or something like that.
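    One low-ceremony way to act on "rules stored as JSON objects in DB fields" is to skip attributes entirely and evaluate the stored rules against the posted values in a service or action filter. The sketch below is framework-agnostic, and every rule name and shape in it is made up purely for illustration:

        using System;
        using System.Collections.Generic;
        using System.Text.Json;
        using System.Text.RegularExpressions;

        // One stored rule, e.g. {"Field":"Email","Kind":"regex","Arg":"^\\S+@\\S+$"}
        public record FieldRule(string Field, string Kind, string Arg);

        public static class RuntimeValidator
        {
            // rulesJson is what would come out of the DB field; values are the submitted form fields.
            public static IEnumerable<string> Validate(string rulesJson, IDictionary<string, string> values)
            {
                var rules = JsonSerializer.Deserialize<List<FieldRule>>(rulesJson) ?? new List<FieldRule>();
                foreach (var rule in rules)
                {
                    values.TryGetValue(rule.Field, out var value);
                    switch (rule.Kind)
                    {
                        case "required" when string.IsNullOrWhiteSpace(value):
                            yield return rule.Field + " is required.";
                            break;
                        case "maxlength" when (value ?? "").Length > int.Parse(rule.Arg):
                            yield return rule.Field + " is longer than " + rule.Arg + " characters.";
                            break;
                        case "regex" when value != null && !Regex.IsMatch(value, rule.Arg):
                            yield return rule.Field + " has an invalid format.";
                            break;
                    }
                }
            }
        }

    The errors it yields can then be copied into ModelState by whatever controller or filter runs it, so the rest of the MVC pipeline behaves as though the rules had been attributes.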

    Read the article

  • image with json and iphone

    - by alecnash
    Hi, I've got some images saved inside a DB on a server and want to use them on my iPhone. I created a PHP file so I can access them with JSON, but I've got some questions. I save all the text data from the DB into an NSDictionary and it works fine. For the images, should I use NSData? Should I write a new PHP file for getting the image, or is one enough, given that I am using an array to store my objects and then use json_encode($arr) for the JSON? Is it OK to store the data in an array, or is everything going to crash?

    Read the article

  • Same data being returned by linq for 2 different executions of a stored procedure?

    - by Paul
    Hello, I have a stored procedure that I am calling through Entity Framework. The stored procedure has 2 date parameters, and I supply different arguments in the 2 times I call it. I have verified using SQL Profiler that the stored procedure is being called correctly and returning the correct results. When I call my method the second time with different arguments, even though the stored procedure is bringing back the correct results, the table created contains the same data as the first time I called it.

        dtStart = 01/08/2009
        dtEnd = 31/08/2009

        public List<dataRecord> GetData(DateTime dtStart, DateTime dtEnd)
        {
            var tbl = from t in db.SP(dtStart, dtEnd)
                      select t;
            return tbl.ToList();
        }

        GetData(new DateTime(2009, 8, 1), new DateTime(2009, 8, 31))  // tbl.field1 value = 45450 - CORRECT
        GetData(new DateTime(2009, 7, 1), new DateTime(2009, 7, 31))  // tbl.field1 value = 45450 - WRONG, 27456 expected

    Is this a case of Entity Framework being clever and caching? I can't see why it would cache this, though, as it has executed the stored procedure twice. Do I have to do something to close tbl? I'm using Visual Studio 2008 + Entity Framework. I also get the message "query cannot be enumerated more than once" a few times every now and then; I am not sure if that is relevant.

    FULL CODE LISTING:

        namespace ProfileDataService
        {
            public partial class DataService
            {
                public static List<MeterTotalConsumpRecord> GetTotalAllTimesConsumption(DateTime dtStart, DateTime dtEnd, EUtilityGroup ug,
                    int nMeterSelectionType, int nCustomerID, int nUserID, string strSelection, bool bClosedLocations, bool bDisposedLocations)
                {
                    dbChildDataContext db = DBManager.ChildDataConext(nCustomerID);
                    var tbl = from t in db.GetTotalConsumptionByMeter(dtStart, dtEnd, (int)ug, nMeterSelectionType, nCustomerID, nUserID,
                                  strSelection, bClosedLocations, bDisposedLocations, 1)
                              select t;
                    return tbl.ToList();
                }
            }
        }

        /// CALLER
        List<MeterTotalConsumpRecord> _P1Totals;
        List<MeterTotalConsumpRecord> _P2Totals;

        public void LoadData(int nUserID, int nCustomerID, ELocationSelectionMethod locationSelectionMethod, string strLocations,
            bool bIncludeClosedLocations, bool bIncludeDisposedLocations, DateTime dtStart, DateTime dtEnd,
            ReportsBusinessLogic.Lists.EPeriodType durMainPeriodType, ReportsBusinessLogic.Lists.EPeriodType durCompareToPeriodType,
            ReportsBusinessLogic.Lists.EIncreaseReportType rptType, bool bIncludeDecreases)
        {
            ///Code for setting properties using parameters..
            _P2Totals = ProfileDataService.DataService.GetTotalAllTimesConsumption(_P2StartDate, _P2EndDate, EUtilityGroup.Electricity,
                1, nCustomerID, nUserID, strLocations, bIncludeClosedLocations, bIncludeDisposedLocations);
            _P1Totals = ProfileDataService.DataService.GetTotalAllTimesConsumption(_StartDate, _EndDate, EUtilityGroup.Electricity,
                1, nCustomerID, nUserID, strLocations, bIncludeClosedLocations, bIncludeDisposedLocations);
            PopulateLines(); // This fills up a list of objects with information for my report, ready for the totals to be added
            PopulateTotals(_P1Totals, 1);
            PopulateTotals(_P2Totals, 2);
        }

        void PopulateTotals(List<MeterTotalConsumpRecord> objTotals, int nPeriod)
        {
            MeterTotalConsumpRecord objMeterConsumption = null;
            foreach (IncreaseReportDataRecord objLine in _Lines)
            {
                objMeterConsumption = objTotals.Find(delegate(MeterTotalConsumpRecord t) { return t.MeterID == objLine.MeterID; });
                if (objMeterConsumption != null)
                {
                    if (nPeriod == 1)
                    {
                        objLine.P1Consumption = (double)objMeterConsumption.Consumption;
                    }
                    else
                    {
                        objLine.P2Consumption = (double)objMeterConsumption.Consumption;
                    }
                    objMeterConsumption = null;
                }
            }
        }
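    This looks like the Entity Framework ObjectContext identity map at work rather than result caching as such: once entities with a given key have been materialised, later calls that return rows with the same keys can hand back the already-tracked objects instead of the freshly returned column values. A sketch of the simplest workaround, a short-lived context per call, follows; the MyEntities context name is an assumption, only GetData and SP come from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class DataAccess
        {
            public List<dataRecord> GetData(DateTime dtStart, DateTime dtEnd)
            {
                using (var db = new MyEntities())              // fresh ObjectContext per call, nothing tracked from last time
                {
                    return db.SP(dtStart, dtEnd).ToList();     // materialise before the context is disposed
                }
            }
        }

    If the context has to live longer than a single call, EF's MergeOption.OverwriteChanges on the relevant query achieves a similar effect by letting database values overwrite the tracked copies.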

    Read the article

  • free public databases with non-trivial table structures?

    - by Caffeine Coma
    I'm looking for some sample database data that I can use for testing and demonstrating a DB tool I am working on. I need a DB that has (preferably) many tables, and many foreign key relationships between the tables. Ideally the data would be in SQL dump format, or at least in something that maintains the foreign key references and could be easily imported into an RDBMS (MySQL or H2). The dataset itself doesn't have to be huge (in fact, it's best if it's not). I thought about using the Stack Overflow Data Dump, but it's only about 5 tables.

    Read the article

  • ASP.NET Connection time out after being idle for a while

    - by yazz
    My ASP.NET website, when trying to connect to the database for the first time after a period of inactivity, throws a timeout exception. I understand the connections in the connection pool get terminated after some idle time for some reason (firewall or Oracle settings) and the pool or app doesn't have a clue about it. Is there any way to validate the connection beforehand so that the first try doesn't throw an exception? I don't have much control over the DB or firewall settings, so I have to deal with this in my application (I would prefer a web.config setting if one exists). I am using ASP.NET 2.0, Oracle Server 11g, and the Microsoft Enterprise Library DAAB for all my DB operations. I did some searching on this topic but didn't find any solid solution yet :(
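    Absent a pool-level validation setting, one blunt but workable approach is to treat the first failure after an idle period as expected and retry once. A minimal sketch of that idea, not tied to DAAB or any particular provider; the runQuery delegate stands in for whatever data-access call the app actually makes:

        using System;
        using System.Data.Common;

        public static class IdleSafeDb
        {
            public static T ExecuteWithOneRetry<T>(Func<T> runQuery)
            {
                try
                {
                    return runQuery();    // first call after idling may hit a connection the firewall silently killed
                }
                catch (DbException)
                {
                    // one retry: the failed connection is abandoned and the call is attempted again
                    return runQuery();
                }
            }
        }

    If the data access happens to run on ODP.NET underneath, its "Validate Connection=true" connection-string keyword performs a similar check on every checkout from the pool, at the cost of an extra round trip.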

    Read the article

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. It's not escaped me that the use of IDisposable objects this way is possibly just dirty pool (plz advise?) for reasons I've not considered, but, that issue aside, it seems fairly straightforward (and good encapsulation). Code:

        abstract class Node
        {
            protected Node(Stream raw)
            {
                // calculate/generate some base class properties
            }
        }

        class FilesystemNode : Node
        {
            public FilesystemNode(FileStream fs)
                : base(fs)
            {
                // all good here; disposing of fs not our responsibility
            }
        }

        class CompositeNode : Node
        {
            public CompositeNode(IEnumerable some_stuff)
                : base(GenerateRaw(some_stuff))
            {
                // rogue stream from GenerateRaw now loose in the wild!
            }

            static Stream GenerateRaw(IEnumerable some_stuff)
            {
                var content = new MemoryStream();
                // molest elements of some_stuff into proper format, write to stream
                content.Seek(0, SeekOrigin.Begin);
                return content;
            }
        }

    I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose() it later in the constructor, and adding a using statement in GenerateRaw() is self-defeating since I need the stream returned. Is there a better way to do this? Preemptive strikes:

    - Yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses.
    - I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller).
    - The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, and moved the body of GenerateRaw() into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor, and not being able to guarantee that it be run for every subtype ever (a Node is not a Node, semantically, without these properties initialized), gave me heebie-jeebies far worse than the (potential) resource leak here does.
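    One pattern that fits these constraints is to let the base class take ownership of the stream when asked: the subclass that synthesises it passes an extra flag, and the base constructor disposes the stream once it has finished reading. The ownsStream overload below is an invention for the sake of the sketch, not part of the original design:

        using System;
        using System.Collections;
        using System.IO;

        abstract class Node
        {
            // Default overload for callers whose stream belongs to someone else (e.g. FilesystemNode).
            protected Node(Stream raw) : this(raw, ownsStream: false) { }

            protected Node(Stream raw, bool ownsStream)
            {
                try
                {
                    // calculate/generate base class properties from raw
                }
                finally
                {
                    if (ownsStream)
                        raw.Dispose();      // the synthesised stream dies here, right after it has been consumed
                }
            }
        }

        class CompositeNode : Node
        {
            public CompositeNode(IEnumerable some_stuff)
                : base(GenerateRaw(some_stuff), ownsStream: true)   // base disposes the synthesised stream
            {
            }

            static Stream GenerateRaw(IEnumerable some_stuff)
            {
                var content = new MemoryStream();
                // write some_stuff into content in the proper format
                content.Seek(0, SeekOrigin.Begin);
                return content;
            }
        }

    This keeps the "a Node is not a Node without these properties" guarantee, because the work still happens in the one base constructor, while the disposal decision follows the stream's real owner.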

    Read the article

  • What are the pros and cons to keeping SQL in Stored Procs versus Code

    - by Guy
    What are the advantages/disadvantages of keeping SQL in your C# source code or in stored procs? I've been discussing this with a friend on an open source project that we're working on (a C# ASP.NET forum). At the moment, most of the database access is done by building the SQL inline in C# and calling to the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have:

    Advantages of SQL in code:
    - Easier to maintain - don't need to run a SQL script to update queries
    - Easier to port to another DB - no procs to port

    Advantages of stored procs:
    - Performance
    - Security
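    To make the comparison concrete, here is what the same query looks like both ways in plain ADO.NET. The table, column and procedure names are invented for the example; nothing here is from the actual forum project:

        using System.Data;
        using System.Data.SqlClient;

        static class PostCounts
        {
            // SQL kept inline in the C# source, parameterised.
            public static int CountInline(SqlConnection conn, int forumId)
            {
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Posts WHERE ForumId = @forumId", conn))
                {
                    cmd.Parameters.AddWithValue("@forumId", forumId);
                    return (int)cmd.ExecuteScalar();
                }
            }

            // The same query moved behind a stored procedure.
            public static int CountViaProc(SqlConnection conn, int forumId)
            {
                using (var cmd = new SqlCommand("dbo.CountPostsByForum", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@forumId", forumId);
                    return (int)cmd.ExecuteScalar();
                }
            }
        }

    The calling code is nearly identical either way, which is partly why the argument tends to centre on maintenance, portability and security rather than on the C# itself.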

    Read the article

  • Importing a large delimited file to a MySQL table

    - by Tom
    I have this large (and oddly formatted) txt file from the USDA's website. It is the NUT_DATA.txt file. The problem is that it is almost 27 MB! I was successful in importing a few other, smaller files, but my method was using file_get_contents, so it makes sense why an error would be thrown if I try to snag 27+ MB into RAM. So how can I import this massive file into my MySQL DB without running into a timeout and RAM issue? I've tried just getting one line at a time from the file, but this ran into a timeout issue. Using PHP 5.2.0. Here is the old script (the fields in the DB are just numbers because I could not figure out what number represented what nutrient; I found this data very poorly documented. Sorry about the ugliness of the code):

        <?
        $file = "NUT_DATA.txt";
        $data = split("\n", file_get_contents($file)); // split each line
        $link = mysql_connect("localhost", "username", "password");
        mysql_select_db("database", $link);
        for ($i = 0, $e = sizeof($data); $i < $e; $i++) {
            $sql = "INSERT INTO `USDA` (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17) VALUES(";
            $row = split("\^", trim($data[$i])); // split each line by caret
            for ($j = 0, $k = sizeof($row); $j < $k; $j++) {
                $val = trim($row[$j], '~');
                $val = (empty($val)) ? 0 : $val;
                $sql .= ((empty($val)) ? 0 : $val) . ','; // this gets rid of those tildes and replaces empty strings with 0s
            }
            $sql = rtrim($sql, ',') . ");";
            mysql_query($sql) or die(mysql_error()); // query the db
        }
        echo "Finished inserting data into database.\n";
        mysql_close($link);
        ?>

    Read the article

  • VB.NET - ASP.NET - MS-Access - SQL Statement

    - by Brian
    I have a button which, when pressed, sets the user's rights in the db (if Administrator, UserTypeID is set to '2', and if Customer it is set to '1'). However, when I run the code below, everything remains the same. I think it's the SQL statement, but I'm not sure. Can anyone help please?

        Protected Sub btnSetUser_Click(sender As Object, e As System.EventArgs) Handles btnSetUser.Click
            Dim conn As New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Users\Brian\Documents\Visual Studio 2010\WebSites\WebSite3\db.mdb;")
            Dim cmd As OleDbCommand = New OleDbCommand("UPDATE [User] SET [UserTypeID] WHERE Username=?", conn)
            conn.Open()
            cmd.Parameters.AddWithValue("@Username", txtUser.Text)
            If ddUserType.SelectedItem.Text = "Administrator" Then
                cmd.Parameters.AddWithValue("@UserTypeID", "2")
                cmd.ExecuteNonQuery()
                lblSetUser.Text = txtUser.Text + "was set to Administrator."
            ElseIf ddUserType.SelectedItem.Text = "Customer" Then
                cmd.Parameters.AddWithValue("@UserTypeID", "1")
                cmd.ExecuteNonQuery()
                lblSetUser.Text = txtUser.Text + "was set to Customer."
            End If
            conn.Close()
        End Sub
        End Class
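    For reference, the UPDATE as written never assigns a value (SET [UserTypeID] has no = ?), and OleDb binds parameters strictly by position, so the new UserTypeID has to be added before the Username parameter. A corrected version of the call, sketched in C# against the same table; everything apart from the table and column names is illustrative:

        using System;
        using System.Data.OleDb;

        static class UserRights
        {
            public static void SetUserType(string connectionString, string userName, int userTypeId)
            {
                using (var conn = new OleDbConnection(connectionString))
                using (var cmd = new OleDbCommand(
                    "UPDATE [User] SET [UserTypeID] = ? WHERE [Username] = ?", conn))
                {
                    // OleDb ignores parameter names; the first ? gets the first parameter added, and so on.
                    cmd.Parameters.AddWithValue("@UserTypeID", userTypeId);
                    cmd.Parameters.AddWithValue("@Username", userName);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }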

    Read the article

  • How do I do automatic data serialization of data objects in Haskell

    - by Adam Gent
    One of the huge benefits in languages that have some sort of reflection/introspection is that objects can be automatically constructed from a variety of sources. For example, in Java I can use the same objects for persisting to a DB (with Hibernate), serializing to XML (with JAXB), or serializing to JSON (json-lib). You can do the same in Ruby and Python too, usually by following some simple rules for properties, or with annotations in Java. Thus I don't need lots of "Domain Transfer Objects" and can concentrate on the domain I am working in. It seems that in very strict FP languages like Haskell and OCaml this is not possible, particularly in Haskell. The only thing I have seen is doing some sort of preprocessing or meta-programming (OCaml). Is it just accepted that you have to do all the transformations from the bottom up? In other words, you have to do a lot of boring work to turn a data type in Haskell into a JSON/XML/DB row object and back again into a data object.

    Read the article

  • The Microsoft Jet database engine cannot open the file '(unknown)'.

    - by murali
    Hi, I am using an Oracle DB. I am uploading keywords into the DB, but I am getting the error:

        java.sql.SQLException: [Microsoft][ODBC Excel Driver] The Microsoft Jet database engine cannot open the file '(unknown)'.
        It is already opened exclusively by another user, or you need permission to view its data.
            at sun.jdbc.odbc.JdbcOdbc.createSQLException(JdbcOdbc.java:6998)
            at sun.jdbc.odbc.JdbcOdbc.standardError(JdbcOdbc.java:7155)
            at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:3106)
            at sun.jdbc.odbc.JdbcOdbcConnection.initialize(JdbcOdbcConnection.java:355)
            at sun.jdbc.odbc.JdbcOdbcDriver.connect(JdbcOdbcDriver.java:209)
            at java.sql.DriverManager.getConnection(DriverManager.java:539)
            at java.sql.DriverManager.getConnection(DriverManager.java:211)
            at keywordsreader.main(keywordsreader.java:28)

    How do I resolve this type of error? Please help me.

    Read the article

  • Evaluating Django Chained QuerySets Locally

    - by jnadro52
    Hello all: I am hoping someone can help me out with a quick question I have regarding chaining Django querysets. I am noticing a slowdown because I am evaluating many data points in the database to create data trends. I was wondering if there was a way to have the chained filters evaluated locally instead of hitting the database. Here is a (crude) example:

        pastries = Bakery.objects.filter(productType='pastry')  # <--- will obviously always hit DB, when evaluated
        cannoli = pastries.filter(specificType='cannoli')       # <--- can this be evaluated locally instead of hitting the DB when evaluated, as long as pastries was evaluated?

    I have checked the docs and I do not see anything specifying this, so I guess it's not possible, but I wanted to check with the 'braintrust' first ;-). BTW - I know that I can do this myself by implementing some methods to loop through these data points and evaluate the criteria, but there are so many data points that my deadline does not permit me to implement this manually. Thanks in advance.

    Read the article

  • Jquery problem - can't expand the row above in a table

    - by apg1985
    Hi people, what I can't figure out is how I would toggle a row in a table using the one below it. So say I have a table with 2 rows: the first contains content and the one below contains a button. When the page loads, the content row is hidden, and when you click the button it toggles the content row on and off. In the example, the first table works but the second does not; I need the second one to work.

        <!DOCTYPE HTML>
        <html>
        <head>
            <title>Testing Horizontal Accordion</title>
            <script type="text/javascript" src="jquery.js"></script>
            <script type="text/javascript">
                $(document).ready(function() {
                    $(".sectionhead").toggle(
                        function() { $(this).next("tr").hide(); },
                        function() { $(this).next("tr").show(); }
                    )
                });
            </script>
        </head>
        <body>
            <table>
                <tr class="sectionhead"><td></td></tr>
                <tr class="child"><td>child</td></tr>
            </table>
            <br>
            <table>
                <tr class="child"><td>child</td></tr>
                <tr class="sectionhead"><td></td></tr>
            </table>
        </body>
        </html>

    Read the article

  • LINQ-to-SQL and SQL Compact - database file sharing problem

    - by Eye of Hell
    Hello. I'm learning LINQ-to-SQL right now and I have written a simple application that defines SQL data:

        [Table(Name = "items")]
        public class Item
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int Id;

            [Column]
            public string Name;
        }

    I have launched 2 copies of the application connected to the same .sdf file and tested whether all database modifications in one application affect the other application. But a strange thing arises. If I use InsertOnSubmit() and DeleteOnSubmit() in one application, added/removed items are instantly visible in the other application via a LINQ 'select' query. But if I try to modify the 'Name' field in one application, it is NOT visible in the other application until it reconnects to the database :(. The test code I use:

        var Items = from c in db.Items
                    where Id == c.Id
                    select c;
        foreach (var Item in Items)
        {
            Item.Name = "new name";
            break;
        }
        db.SubmitChanges();

    Can anyone suggest what I'm doing wrong and why InsertOnSubmit()/DeleteOnSubmit() works but SubmitChanges() doesn't?
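    A likely explanation, offered as an assumption rather than a confirmed diagnosis: the reading application's DataContext already has those Item objects in its identity map, so re-running the select hands back the cached objects instead of re-reading the changed Name column, while inserts and deletes show up because they add or remove rows rather than change tracked ones. One way to force fresh values without reconnecting is DataContext.Refresh; only db and Item mirror the question, the helper itself is illustrative:

        using System.Data.Linq;
        using System.Linq;

        static class ItemReader
        {
            // Pull current column values from the .sdf into the entities this
            // DataContext is already tracking, then query as usual.
            public static void RefreshItems(DataContext db)
            {
                var items = db.GetTable<Item>().ToList();
                db.Refresh(RefreshMode.OverwriteCurrentValues, items);
            }
        }

    For a context that only ever reads, setting ObjectTrackingEnabled = false before querying sidesteps the caching entirely.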

    Read the article

  • SQLite table query

    - by soclose
    Hi, I query the table by using the function below:

        public Cursor getTableInfo() throws SQLException
        {
            return db.query(TableName, null, null, null, null, null, null);
        }

    I got the error "ViewRoot.handleMessage(Message) line: 1704". I can insert the data but can't query the data. I call the function like this:

        Cursor c = db.getTableInfo();
        int cRow = c.getCount();
        if (cRow == 0)
        {
            Toast.makeText(NewContact.this, "No Record", Toast.LENGTH_LONG).show();
        }

    Please help me.

    Read the article

  • How to make active services highly available?

    - by Jader Dias
    I know that with Network Load Balancing and Failover Clustering we can make passive services highly available. But what about active apps? Example: one of my apps retrieves some content from an external resource at a fixed interval. I have imagined the following scenarios:

    - Run it on a single machine. Problem: if this instance falls over, the content won't be retrieved.
    - Run it on each machine of the cluster. Problem: the content will be retrieved multiple times.
    - Have it on each machine of the cluster, but run it only on one of them. Each instance will have to check some sort of common resource to decide whether it is its turn to do the task or not.

    When I was thinking about solution #3, I wondered what the common resource should be. I have thought of creating a table in the database, which we could use to get a global lock. Is this the best solution? How do people usually do this? By the way, it's a C# .NET WCF app running on Windows Server 2008.
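    For the shared resource in option #3, one common table-free variant on SQL Server is an application lock: each instance tries to take a named lock before doing the fetch, and only the winner runs that interval. A sketch under that assumption, with the resource name and connection handling invented for the example and error handling kept minimal:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class ActiveJob
        {
            // Returns true if this node won the lock and did the work this round.
            public static bool TryDoWorkAsLeader(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("sp_getapplock", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.AddWithValue("@Resource", "content-fetch-job");
                        cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
                        cmd.Parameters.AddWithValue("@LockOwner", "Session");
                        cmd.Parameters.AddWithValue("@LockTimeout", 0);   // don't wait: someone else owns it
                        var result = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
                        result.Direction = ParameterDirection.ReturnValue;
                        cmd.ExecuteNonQuery();
                        if ((int)result.Value < 0)
                            return false;   // another node is the active one for this interval
                    }

                    // ...retrieve the external content here while the session lock is held...
                    return true;
                }
            }
        }

    The lock disappears with the session, so if the active node dies, the next node that asks simply wins and takes over.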

    Read the article
