Search Results

Search found 22308 results on 893 pages for 'floating point'.


  • expand div when user clicks in text box?

    - by Joel
    I'd like a div to expand and show its contents when a user clicks into a text box to enter text. I haven't seen that anywhere before. I know how to make things expand on click, but can someone point me to what I'm looking for? Basically, I have an email signup box that initially shows only the input field for the email address; if the user actually decides to get on the email list, the div should expand to also ask for their name and zip code. Thanks! I've already got jQuery set up for the email form, so I'd like to build on that if possible.
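
    A minimal sketch of the focus-triggered expansion (the element IDs and the hidden extra-fields markup are assumptions, not the asker's actual code):

        // Hide the extra fields initially, then reveal them when the
        // email input receives focus.
        $('#signup-extra').hide();
        $('#signup-email').focus(function () {
            $('#signup-extra').slideDown();
        });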

  • where do i put html files in my web-app folder for a lift project with maven?

    - by egervari
    I'm new to the Lift framework for Scala. For some reason, index.html resides in the web-app directory, and when I start up Jetty, http://localhost:8080/ will point to that index.html file just fine. However, if I put a login.html file in the same folder as index.html and then go to http://localhost:8080/login, Lift does not serve the file. Where do I need to put the files to get them registered? I am a little lost because the behaviour only seems to work for index.html and nothing else. This is what I see when I view source in Chrome:
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <body>The Requested URL /login was not found on this server</body>
        </html>
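
    In Lift, serving a template usually takes more than dropping the file into web-app: the page also has to be reachable through the SiteMap defined in the Boot class. A hedged sketch of what Boot might need (the package name and menu titles here are assumptions):

        import net.liftweb.http.LiftRules
        import net.liftweb.sitemap._
        import net.liftweb.sitemap.Loc._

        class Boot {
          def boot {
            LiftRules.addToPackages("com.example")   // hypothetical package

            // Templates are only served if the SiteMap can reach them.
            val entries = Menu(Loc("Home", List("index"), "Home")) ::
                          Menu(Loc("Login", List("login"), "Login")) :: Nil
            LiftRules.setSiteMap(SiteMap(entries: _*))
          }
        }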

  • Why not speed up testing by using function dependency graph?

    - by Maltrap
    It seems logical to me that if you have a dependency graph of your source code (a tree showing the call graph of all functions in your code base) you should be able to save a tremendous amount of time doing functional and integration tests after each release. Essentially you will be able to tell the testers exactly what functionality to test, as the rest of the features remain unchanged from a source code point of view. If, for instance, you fix a spelling mistake in one piece of the code, there is no reason to run through your whole test script again "just in case" you introduced a critical bug. My question: why are dependency trees not used in software engineering, and if you use them, how do you maintain them? What tools are available that generate these trees for C# .NET, C++ and C source code?

  • Writing a custom iterator -- what to do if you're at the end of the array?

    - by Goose Bumper
    I'm writing a custom iterator for a Matrix class, and I want to implement the increment method, which gets called when the iterator is incremented:
        void MatrixIterator::increment() {
            // go to the next element
        }
    Suppose the iterator has been incremented too many times and now points past the end of the matrix (i.e. past the one-past-the-end point). What is the best practice for this situation? Should I catch this with an assert, or should I just say it's the user's responsibility to keep track of where the iterator is pointing and it's none of my business?
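
    For what it's worth, the common convention in standard-library implementations is that incrementing an end iterator is undefined behaviour, with debug builds trapping it via an assertion. A minimal sketch, assuming hypothetical pos_ and matrix_ members:

        #include <cassert>

        void MatrixIterator::increment() {
            // Incrementing past one-past-the-end is a caller bug; trap it
            // loudly in debug builds rather than defining behaviour for it.
            assert(pos_ < matrix_->size() && "increment() past end()");
            ++pos_;
        }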

  • WPF with code only

    - by rwallace
    I've seen a lot of questions about the merits of WPF here, and essentially every answer says it's the bee's knees, but essentially every answer also talks about things like XAML, and in many cases graphic designers, Expression Blend, etc. My question is: is it worth getting into WPF if you're a solo coder working in C# only? Specifically, I don't have a graphic designer, nor any great talent in that area myself; I don't use point-and-click tools; I write everything in C#, not XML. Winforms works fine under those conditions. Is the same true of WPF? Or does it turn out that important functions can only be done in XAML, that the default settings aren't intended for actual use, that you need a graphic designer on the team to make things look good, and that somebody in my position would be better off sticking to Winforms?
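
    WPF can be driven entirely from C#; everything XAML declares has an object-model equivalent. A minimal code-only sketch (all names here are illustrative):

        using System;
        using System.Windows;
        using System.Windows.Controls;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                var button = new Button { Content = "Click me" };
                button.Click += (s, e) => MessageBox.Show("Hello from code-only WPF");

                var window = new Window
                {
                    Title = "No XAML here",
                    Width = 300,
                    Height = 200,
                    Content = button
                };

                new Application().Run(window);  // runs the dispatcher loop
            }
        }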

  • What's the correct way to stop a background process on Mac OS X?

    - by mcsheffrey
    I have an application with two components: a desktop application that users interact with, and a background process that can be enabled from the desktop application. Once the background process is enabled, it runs as a user launch agent, independently of the desktop app. What I'm wondering is what to do when the user disables the background process. At this point I want to stop the background process, but I'm not sure what the best approach is. The three options that I see are:
    1. Use the 'kill' command. Direct, but not reliable, and it just seems somewhat "wrong".
    2. Use an NSMachPort to send an exit request from the desktop app to the background process. This is the best approach I've thought of, but I've run into an implementation problem (I'll be posting this in a separate question) and I'd like to be sure the approach is right before going much further.
    3. Something else?
    Thank you in advance for any help/insight that you can offer.
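
    Since the process already runs as a launch agent, one option worth noting is letting launchd do the stopping: unloading the agent's job sends it a SIGTERM, which the process can handle for a clean shutdown. A sketch, assuming a hypothetical plist label:

        # Stop (and disable) the agent; the plist path/label is an assumption.
        launchctl unload ~/Library/LaunchAgents/com.example.myagent.plist

        # Re-enable it later:
        launchctl load ~/Library/LaunchAgents/com.example.myagent.plist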

  • prolog to solve grammar involving braces

    - by Abhilash Muthuraj
    I'm trying to write a DCG grammar in Prolog and have succeeded up to a point, but I'm stuck on evaluating expressions involving parentheses, like this query:
        expr(T, ['(', 5, +, 4, ')', *, 7], []).
    My grammar so far:
        expr(Z) --> num(Z).
        expr(Z) --> num(X), [+], expr(Y), {Z is X+Y}.
        expr(Z) --> num(X), [-], expr(Y), {Z is X-Y}.
        expr(Z) --> num(X), [*], expr(Y), {Z is X*Y}.
        num(D) --> [D], {number(D)}.
        eval(L, V, []) :- expr(V, L, []).
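
    One way to extend the grammar - a hedged sketch that keeps the original's right-recursive shape (note it still ignores operator precedence): allow a parenthesised expression anywhere a bare number is accepted.

        primary(Z) --> num(Z).
        primary(Z) --> ['('], expr(Z), [')'].

        expr(Z) --> primary(Z).
        expr(Z) --> primary(X), [+], expr(Y), {Z is X+Y}.
        expr(Z) --> primary(X), [-], expr(Y), {Z is X-Y}.
        expr(Z) --> primary(X), [*], expr(Y), {Z is X*Y}.

        num(D) --> [D], {number(D)}.

        % ?- expr(T, ['(', 5, +, 4, ')', *, 7], []).
        % T = 63.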

  • How to see what objects lie in which generation in YourKit?

    - by prams
    I am using YourKit (11.0) to profile my J2EE app. The app uses Java 6 and runs on 64-bit Linux (CentOS). I was told that YourKit can possibly tell us which objects exist in which generation (eden, old, etc.) at any given point in time. On a side note, I am trying to chase a problem where memory usage keeps increasing until a major collection happens (every 4 hrs), and I am suspicious about a few particular objects, so I am interested to know where those objects lie at different times. Fortunately I know a lot of memory is being consumed in one particular area of code (so other objects are possibly being put directly into the old gen), but I don't know exactly how much of that memory is being put into eden space, how much is being collected by the minor collections, etc. Thanks.
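
    As a complement to the profiler, generation occupancy can be watched from the command line on a HotSpot JVM with jstat (the pid and sampling interval here are placeholders):

        # Prints utilisation of the survivor spaces (S0/S1), eden (E),
        # old (O) and perm (P) generations as percentages, plus GC
        # counts and times, every 5 seconds.
        jstat -gcutil <pid> 5000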

  • Should I log my website's 404 errors?

    - by Ivan Zlatanov
    I have an ASP.NET website, but this question isn't really about technology; it is rather about practice. Should we log our 404 errors? My reasoning: This is a potentially vulnerable point, because an unfriendly user may fill up your hard drive in no time just by requesting wrong URLs! Some browsers request resources up front - for example favicon.ico - even if it's not there, which is really annoying. But I really would like to know about a broken link if one exists on my website. Should I depend on the URL referrer? The problem with the URL referrer is that I cannot distinguish an internal redirect of mine which may be broken from an unfriendly request from outside. What does practice suggest?
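
    A middle ground is to log 404s but cap the volume. A hedged ASP.NET sketch in Global.asax (the Logger and the throttling policy are assumptions):

        protected void Application_Error(object sender, EventArgs e)
        {
            var httpEx = Server.GetLastError() as HttpException;
            if (httpEx != null && httpEx.GetHttpCode() == 404)
            {
                // Hypothetical logger; a real one should rate-limit or
                // roll files so hostile crawlers can't fill the disk.
                Logger.Warn("404: " + Request.RawUrl + " referrer: " +
                            (Request.UrlReferrer != null
                                 ? Request.UrlReferrer.ToString()
                                 : "(none)"));
            }
        }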

  • How to nest a Location directive inside a virtual host config?

    - by Josh
    I am trying to nest a Location directive inside a virtual host config like this:
        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /home/deployer/apps/mysite/current/public
            ErrorLog /var/log/prod.log
            <Location "/shop">
                DocumentRoot /home/deployer/apps/mysite_shop/current/public
                ErrorLog /var/log/prod.log
            </Location>
        </VirtualHost>
    What I want to do is go to mysite.com/shop and point it to another application. Is this possible? Is there another method of doing this? I get an error because apparently Location directives do not accept DocumentRoot. Thanks.
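
    DocumentRoot is only valid at server or virtual-host scope, so one common alternative is mapping the URL path with Alias instead - a sketch, assuming the same paths as above and Apache 2.2-era access directives:

        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /home/deployer/apps/mysite/current/public

            # Map /shop onto the second application's public directory.
            Alias /shop /home/deployer/apps/mysite_shop/current/public
            <Directory /home/deployer/apps/mysite_shop/current/public>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    If the second application runs in its own app server rather than as static files, a reverse proxy (ProxyPass /shop http://...) may fit better than Alias.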

  • GIS: line_locate_point() in Python

    - by miracle2k
    I'm pretty much a beginner when it comes to GIS, but I think I understand the basics - it doesn't seem too hard. But all these acronyms and different libraries - GEOS, GDAL, PROJ, PCL, Shapely, OpenGEO, OGR, OGC, OWS and what not, each seemingly depending on any number of others - are slightly overwhelming me. Here's what I would like to do: given a number of points and a linestring, I want to determine the location on the line closest to a certain point. In other words, what PostGIS's line_locate_point() does: http://postgis.refractions.net/documentation/manual-1.3/ch06.html#line%5Flocate%5Fpoint Except I want to use plain Python. Which library or libraries should I look at for doing these kinds of spatial calculations in Python, and is there one that specifically supports a line_locate_point() equivalent?
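
    Shapely (which wraps GEOS) covers exactly this: LineString.project() is the line_locate_point() equivalent. A minimal sketch with made-up coordinates:

        from shapely.geometry import LineString, Point

        line = LineString([(0, 0), (10, 0)])
        pt = Point(3, 4)

        # Distance along the line to the point nearest pt
        # (PostGIS line_locate_point returns the normalized form).
        d = line.project(pt)                    # 3.0, in coordinate units
        f = line.project(pt, normalized=True)   # 0.3, fraction of the length
        nearest = line.interpolate(d)           # POINT (3 0)
        print(d, f, nearest)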

  • Luasql and SQLite?

    - by OverTheRainbow
    Hello, I just got started looking at Lua as an easy way to access the SQLite DLL, but I ran into an error while trying to use the DB-agnostic LuaSQL module:
        require "luasql.sqlite"
        module "luasql.sqlite"
        print("Content-type: Text/html\n")
        print("Hello!")
    Note that I'm trying to start from the most basic setup, so I only have the following files in the work directory, and sqlite.dll is actually the renamed sqlite3.dll from the LuaForge site:
        Directory of C:\Temp
        <DIR>     luasql
        lua5.1.exe
        lua5.1.dll
        hello.lua
        Directory of C:\Temp\luasql
        sqlite.dll
    Am I missing some binaries that would explain the error? Thank you. Edit: I renamed the DLL back to its original sqlite3.dll and updated the source to reflect this (I originally renamed it because that's how it was called in a sample I found). At this point, here's what the code looks like...
        require "luasql.sqlite3" -- attempt to call field 'sqlite' (a nil value)
        env = luasql.sqlite()
        env:close()
    ...and the error message I'm getting:
        C:\>lua5.1.exe hello.lua
        lua5.1.exe: hello.lua:4: attempt to call field 'sqlite' (a nil value)
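
    For reference, in LuaSQL the environment constructor is named after the driver module, so a sqlite3 build exposes luasql.sqlite3(), not luasql.sqlite(). A hedged sketch of the working pattern (the database filename is an assumption):

        local luasql = require "luasql.sqlite3"
        local env = luasql.sqlite3()        -- constructor matches the module name
        local conn = env:connect("test.db") -- hypothetical database file
        -- ... run queries via conn:execute(...) ...
        conn:close()
        env:close()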

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
        SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and only afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be of relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few.

    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads.

    How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break Rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

  • Garbage Collection leak? Scripting Bridge leak?

    - by Y.Vera
    Hello everyone! I'm always really picky about memory leaks, and I cannot understand why my garbage-collected application leaks. My code is entirely memory-managed and it runs great without garbage collection - not a single leak. However, as soon as I turn on garbage collection it leaks! Just to prove a point, why does this leak in a garbage-collected app? (Place this dummy code in applicationDidFinishLaunching:.)
        NSOpenPanel *panel = [NSOpenPanel openPanel];
        [panel beginSheetModalForWindow:window
                      completionHandler:^(NSInteger result) { NSBeep(); }];
    Also, is there a way to prevent leaks in apps (garbage collected or otherwise) that use Scripting Bridge? It seems as if they all leak, even the sample ones in Xcode. Thanks everybody!

  • In SQL Server merge replication, how does reinitializing work?

    - by Craig Shearer
    I have set up a pull subscription to a merge publication in SQL Server. I use parameterized row filters on some tables. This works fine with the initial synchronization - just the rows matching the filter arrive in the replicated (client) database. However, at some later point I'd like to be able to synchronize the replicated database again from the server and have new rows that match the parameterized row filters appear in the client database. The documentation seems to indicate that I can call Reinitialize() to do this. However, when I try this and Synchronize again, I get an error saying that the script 'snapshot.pre' cannot be applied to the database. I've inspected the script and can see why - it's trying to drop some functions that are used by the tables in the database. It would appear that for Reinitialize() to work it requires that the database be blank. Am I misunderstanding something here? Is there a way to make this work?

  • Is it a good idea to work on header files only, just at the start of the project?

    - by m4design
    To explain my point further: I'm a beginner in programming, and I'm working on a small project. Instead of separating the .cpp files from the header files, I'm implementing the code in the header files, and making one .cpp file for testing. I do this to have fewer files, hence easier navigation. Later I'll separate the code as it should be. Will this cause any problems? Should I continue doing that? Thanks.

  • Bookmarklet to edit current URL.

    - by garymc
    Hi, I'm looking for a simple bookmarklet to take the current URL of my website and reload it with a couple of changes. For example: take the current page, http://www.example.com/pages/, change it to https://admin.example.com/pages/, and then load that new URL. I tried searching for a bookmarklet that can do this but I couldn't find one. Can anyone point me in the right direction? Even a bookmarklet that does something like this that I can edit to suit my needs. Thanks for any help.
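
    A minimal sketch of such a bookmarklet, using the example.com hosts above as placeholders (when saved as a bookmark it should be collapsed onto a single line):

        javascript:(function () {
            // Swap the public host for the admin host and navigate there.
            location.href = location.href.replace(
                'http://www.example.com',
                'https://admin.example.com');
        })();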

  • Which is better: string html generation or jquery DOM element creation?

    - by Ed Woodcock
    Ok, I'm rewriting some vanilla JS functions in my current project, and I'm at a point where there's a lot of HTML being generated for tooltips etc. My question is: is it better/preferred to do this:
        var html = '<div><span>Some More Stuff</span></div>';
        if (someCondition) {
            html += '<div>Some Conditional Content</div>';
        }
        $('#parent').append(html);
    OR
        var html = $('<div/>').append($('<span/>').append('Some More Stuff'));
        if (someCondition) {
            html.append($('<div/>').append('Some conditional content'));
        }
        $('#parent').append(html);
    ?

  • .htaccess file: IE not working; Firefox, Safari & Chrome working

    - by user361284
    Hi, I've built a site in Interspire Web Publisher and it was working fine; it works in Firefox, Safari and Chrome, but when I fired up Internet Explorer 7 & 8, only the home page works - all links to other pages show nothing. Do you think it could have something to do with the .htaccess file? But why would it work at one point and then not another? I did a test site (it's database-driven) with 3 small pages and it worked fine in Internet Explorer... very weird! My website: http://www.artandepilepsy.com

  • What is the benefit of using int instead of bigint in this case?

    - by Yeti
    (MySQL n00b) I have 3 tables, where id is int(10), photo_num is bigint(20), and PHOTO records are limited to 3 million.
    PHOTO:
        +-------+-----------------+
        | id    | photo_num       |
        +-------+-----------------+
        | 1     | 123456789123    |
        | 2     | 987654321987    |
        | 3     | 5432167894321   |
        +-------+-----------------+
    COLOR:
        +-------+-----------------+---------+
        | id    | photo_num       | color   |
        +-------+-----------------+---------+
        | 1     | 123456789123    | red     |
        | 2     | 987654321987    | blue    |
        | 3     | 5432167894321   | green   |
        +-------+-----------------+---------+
    SIZE:
        +-------+-----------------+---------+
        | id    | photo_num       | size    |
        +-------+-----------------+---------+
        | 1     | 123456789123    | large   |
        | 2     | 987654321987    | small   |
        | 3     | 5432167894321   | medium  |
        +-------+-----------------+---------+
    Both the COLOR and SIZE tables will have several million records.
    Q1: Is it better to change photo_num on COLOR and SIZE to an int(10) that points to PHOTO's id? Right now I use these (PHOTO is nowhere in the picture):
        SELECT * FROM COLOR WHERE photo_num='xxx';
        SELECT * FROM SIZE WHERE photo_num='xxx';
    Q2: How would the SELECT query look if PHOTO's id were used in COLOR and SIZE?
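
    For Q2, a hedged sketch of what the lookup might become once COLOR carries an int photo_id referencing PHOTO.id (the photo_id column name is an assumption):

        -- One indexed lookup on PHOTO, then a join on the narrow int key.
        SELECT c.*
        FROM COLOR c
        JOIN PHOTO p ON p.id = c.photo_id
        WHERE p.photo_num = 'xxx';

    The trade-off: every row of COLOR and SIZE stores a 4-byte int instead of an 8-byte bigint, at the cost of one extra join per query.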

  • Time to ignore IE?

    - by Delan Azabani
    In this answer: http://stackoverflow.com/questions/2781013/does-anyone-have-a-easy-to-use-png-fix-for-ie/2781041#2781041 which got voted down considerably, I point out the need to ignore Internet Explorer, or at least its old version 6, for the following reasons:
    1. It is hard to hack for, and some features don't exist at all.
    2. The more you hack for IE, the longer people blindly use it (vicious cycle).
    My website, azabani.com, doesn't hack for IE at all. The layout looks somewhat broken in the browser, and most of my projects require features not present in IE's codebase. I would like to know if you support my view, or if you share views with those who downvoted my answer.

  • SQLite Transaction fills a table BEFORE the transaction is committed

    - by user1500403
    Hello, I have code that creates a DataTable (in memory) from a SELECT SQL statement. However, I've realised that this DataTable is being filled during the loop rather than as a result of the transaction commit statement. It does the job, but it's slow. What am I doing wrong?
        Inalready.Clear() 'clears a dictionary
        Using connection As New SQLite.SQLiteConnection(conectionString)
            connection.Open()
            Dim sqliteTran As SQLite.SQLiteTransaction = connection.BeginTransaction()
            Try
                oMainQueryR = "SELECT * FROM detailstable WHERE name= :name AND Breed= :Breed"
                Dim cmdSQLite As SQLite.SQLiteCommand = connection.CreateCommand()
                Dim oAdapter As New SQLite.SQLiteDataAdapter(cmdSQLite)
                With cmdSQLite
                    .CommandType = CommandType.Text
                    .CommandText = oMainQueryR
                    .Parameters.Add(":name", SqlDbType.VarChar)
                    .Parameters.Add(":Breed", SqlDbType.VarChar)
                End With
                Dim c As Long = 0
                For Each row As DataRow In list.Rows 'this is the list with 500 names
                    If Inalready.ContainsKey(row.Item("name")) Then
                    Else
                        c = c + 1
                        Form1.TextBox1.Text = " Fill .... " & c
                        Application.DoEvents()
                        Inalready.Add(row.Item("name"), row.Item("Breed"))
                        cmdSQLite.Parameters(":name").Value = row.Item("name")
                        cmdSQLite.Parameters(":Breed").Value = row.Item("Breed")
                        oAdapter.Fill(newdetailstable)
                    End If
                Next
                oAdapter.FillSchema(newdetailstable, SchemaType.Source)
                Dim z = newdetailstable.Rows.Count
                'At this point newdetailstable is already filled up and I haven't even committed the transaction
                sqliteTran.Commit()
            Catch ex As Exception
            End Try
        End Using

  • ASP.NET MVC, JSON & non-JavaScript clients

    - by redsquare
    I need to ensure that an application I am developing is accessible and also works with JavaScript turned off. I just need a pointer to assist with the following. I have 3 'chained' select boxes, and I want JavaScript-enabled clients to have a nice Ajax experience: I can easily write the required functionality to populate the chained boxes on the change event of the preceding select, using jQuery and JSON with a WCF service. However, what about the non-JavaScript client? Would I wrap a submit button next to each select and place these inside their own form, posting back with a certain action or a different querystring parameter? Can the same controller give me a partial JSON response as well as feeding the full HTML response? Can anyone point me to a good demo that utilises both JSON and normal HTTP posts to produce the same result in ASP.NET MVC? All ASP.NET MVC demos/examples I see forget about the non-JavaScript-enabled client.
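
    One common pattern is a single action that branches on the request type - a hedged ASP.NET MVC sketch (the repository, action and view names are assumptions):

        public ActionResult Cities(int stateId)
        {
            var cities = _repository.GetCities(stateId);  // hypothetical data access

            // Ajax clients get JSON to feed the jQuery-driven select;
            // non-JavaScript clients get a full-page render of the same data.
            // (JsonRequestBehavior is MVC 2+; on MVC 1, plain Json(cities) works.)
            if (Request.IsAjaxRequest())
                return Json(cities, JsonRequestBehavior.AllowGet);

            return View("ChainedSelects", cities);
        }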

  • Does MSBuild recognise any build configurations other than DEBUG|RELEASE?

    - by Dean
    I created a configuration named Test via Visual Studio, which currently just takes all of DEBUG's settings; however, I employ compiler conditions to determine some specific actions depending on whether the build is TEST, DEBUG or RELEASE. How can I get my MSBuild script to detect the TEST configuration? Currently I build with:
        <MSBuild Projects="@(SolutionsToBuild)" Properties="Configuration=$(Configuration);OutDir=$(BuildDir)\Builds\" />
    where @(SolutionsToBuild) is my solution. The Common MSBuild Project Properties page states that $(Configuration) is a common property, but it always appears blank. Does this mean that it never gets set but is simply reserved for my use, or that it can ONLY detect DEBUG|RELEASE? If so, what is the point in allowing the creation of different build configurations?
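
    For reference, $(Configuration) is not set automatically: MSBuild accepts any configuration name, but the property only has a value if the caller passes it in or the script defaults it. A sketch of the usual idiom (the script filename is illustrative):

        <!-- In the build script: default Configuration when none is passed. -->
        <PropertyGroup>
          <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
        </PropertyGroup>

    and then select the Test configuration from the command line:

        msbuild build.proj /p:Configuration=Test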

  • Using a RegEx in a SQL Query

    - by Jim B
    Hey everyone, here's the situation I'm in: we have a column in our database that contains a 3-digit number surrounded by some text. This number is actually a PK in another table, and I need to extract it so I can implement a proper FK relationship. Here's an example of what currently resides in the column:
        Some Text Goes Here - (305) Followed By Some More Text
    So what I'm looking to do is extract the '305' from the column, and hopefully end up with a result that looks something like this (pseudo-code):
        SELECT <My Extracted Value>, Original Column Text, Id FROM dbo.MyTable
    It seems to me that using a regex match in my query is the most effective way to do this. Can anybody point me in the right direction?
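
    T-SQL has no built-in regex, but for a fixed "(three digits)" shape, PATINDEX can stand in for one. A hedged sketch (the SomeText column name is an assumption):

        -- PATINDEX returns the 1-based position of the '(' that starts the
        -- "(ddd)" pattern; the digits begin one character after it.
        SELECT SUBSTRING(SomeText,
                         PATINDEX('%([0-9][0-9][0-9])%', SomeText) + 1,
                         3) AS ExtractedValue,
               SomeText,
               Id
        FROM dbo.MyTable
        WHERE PATINDEX('%([0-9][0-9][0-9])%', SomeText) > 0;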
