Search Results

Search found 2460 results on 99 pages for 'dave carpeneto'.


  • onsubmit() does not work.

    - by the Hampster
    Here is something that I find disturbing. I created a small form, and I'm using AJAX to validate it for me. I have a JavaScript function authenticate() that works only some of the time. <form method="post" action="" id="login_form" onsubmit="authenticate()"> // various inputs <input type="button" onclick="authenticate()" value="Log In"> </form> authenticate() works just fine when I click the button. However, if I press Enter the form is submitted, and it fails. It also fails if I call onsubmit(). In my debugging, I alert the outgoing texts -- they are identical. However, the Prototype Ajax call fires its onSuccess, but there is just no response from the server. (The server outputs either "Success" or "Failure".) Why the different behaviors for onclick vs. onsubmit? The exact same function is called, but the results are different. Any help would be appreciated. --Dave
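
    One plausible cause (an assumption, not something confirmed in the post): pressing Enter triggers the form's default submit, which navigates the page and aborts the in-flight Ajax request, while the type="button" input never submits the form, so its request completes. A minimal sketch of the usual remedy is to cancel the default submission:

        <form method="post" action="" id="login_form"
              onsubmit="authenticate(); return false;">
            <!-- various inputs -->
            <input type="button" onclick="authenticate()" value="Log In">
        </form>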


  • Is there a way to get an ASMX Web Service created in VS 2005 to receive and return JSON?

    - by Ben McCormack
    I'm using .NET 2.0 and Visual Studio 2005 to try to create a web service that can be consumed both as SOAP/XML and JSON. I read Dave Ward's answer to the question How to return JSON from a 2.0 asmx web service (in addition to reading other articles at Encosia.com), but I can't figure out how I need to set up the code of my asmx file in order to work with JSON using jQuery. Two questions: How do I enable JSON in my .NET 2.0 ASMX file? What's a simple jQuery call that could consume the service using JSON? Also, I notice that since I'm using .NET 2.0, I'm not able to use System.Web.Script.Services.ScriptService. Here's my C# code for the demo ASMX service: using System; using System.Web; using System.Collections; using System.Web.Services; using System.Web.Services.Protocols; /// <summary> /// Summary description for StockQuote /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class StockQuote : System.Web.Services.WebService { public StockQuote () { //Uncomment the following line if using designed components //InitializeComponent(); } [WebMethod] public decimal GetStockQuote(string ticker) { //perform database lookup here return 8; } [WebMethod] public string HelloWorld() { return "Hello World"; } } Here's a snippet of jQuery I found on the internet and tried to modify: $(document).ready(function(){ $("#btnSubmit").click(function(event){ $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "http://bmccorm-xp/WebServices/HelloWorld.asmx", data: "", dataType: "json" }); event.preventDefault(); }); });
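
    A sketch of the usual approach (hedged: it assumes ASP.NET AJAX Extensions 1.0 is installed, which is the add-on that brings the [ScriptService] attribute to .NET 2.0): decorate the service class with [ScriptService], then POST to the method URL with a JSON body. The URL and ticker value below just mirror the demo service; the response shape (bare value vs. a wrapped object) depends on the ASP.NET version:

        $.ajax({
            type: "POST",
            contentType: "application/json; charset=utf-8",
            url: "/WebServices/StockQuote.asmx/GetStockQuote", // method name in the URL
            data: '{"ticker":"MSFT"}',                         // a JSON body, not ""
            dataType: "json",
            success: function (response) {
                // .NET 3.5 wraps results as {"d": ...}; 2.0 + Extensions does not
                alert(response.d !== undefined ? response.d : response);
            }
        });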


  • What are the useful UNIX functions that MS doesn't implement? And why? [closed]

    - by prosseek
    When programming with Python, I came across some functions that are not implemented on Windows; os.fork() may be one of them. UNIX came before Windows NT, so the NT developers (most notably Dave Cutler) must have known about the features and functions of UNIX. But, to me, it seems that MS disliked UNIX so much that they mistakenly/intentionally skipped or distorted some of the useful UNIX functions/features; e.g. /abc/def in UNIX becomes \abc\def in Windows, as an easy example. And when I read the Windows System Programming book, I felt uncomfortable, as the Windows system functions seem nothing more than a tweak of UNIX. (I might be wrong.) What are those functions/features that MS OSes don't have, but UNIX originated? Is there any reason for this? Did they just want to differentiate themselves from the UNIX world? Or did they think some of the UNIX functions were unnecessary? Is Windows a tweak of UNIX? Or are there any great OS features that were invented at MS to make Windows better than UNIX?
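
    To make the os.fork() example concrete, here is a minimal sketch (standard library only); on Windows this raises AttributeError because the os module simply has no fork attribute there:

        import os

        pid = os.fork()  # POSIX only: the os module has no fork() on Windows
        if pid == 0:
            print("hello from the child process")
        else:
            print("hello from the parent; child pid is %d" % pid)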


  • NSFetchedResultsController is driving me crazy

    - by user267980
    Hi everyone. I've been building an app for a month using NSFetchedResultsController, testing against the 3.1.2 SDK. The problem is that I've used NSFetchedResultsController everywhere in my app, and it was working on the 3.1.2 version of the SDK; now my client says I should make it compatible with the 3.0 version, and the deadline is almost here. But it crashes every time I change an object handled by the controller, with very weird errors. The problem occurs when removing the last object in a section and when a change makes an object move to another section. I've been using sample code from "More iPhone 3 Development: Tackling iPhone SDK 3" by Dave Mark and Jeff LaMarche. I've also included some changes from link text Here is a sample output of the console when the application crashes. *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of sections. The number of sections contained in the table view after the update (1) must be equal to the number of sections contained in the table view before the update (2), plus or minus the number of sections inserted or deleted (2 inserted, 0 deleted).' 2010-03-14 16:23:29.758 Instaproofs[5879:207] Stack: ( 807902715, 7364425, 807986683, 811271572, 815059090, 815007323, 211023, 4363331, 810589786, 807635429, 810579728, 3620573, 3620227, 3614682, 3609719, 27337, 810595174, 807686849, 807683624, 839142449, 839142646, 814752238 ) If I had known that NSFetchedResultsController was so buggy, I would never have used it. So basically I need my NSFetchedResultsControllerDelegate to work fine on the 3.0 and later SDKs. It would be a lifesaver if someone could help me figure out what I'm doing wrong.


  • Making RDoc Ruby Gem Default on Mac OS X

    - by jkale
    Hey all, I've recently installed RDoc (version 2.4.3) through RubyGems to replace the one shipped with Mac OS X (version 1.0.1). Unfortunately, I still get RDoc 1.0.1 when I run rdoc at the command line. rdoc -v returns: RDoc V1.0.1 - 20041108 I tried amending the $PATH variable to put the RDoc 2.4.3 folder first, but no luck. I couldn't find anything about this online either, so I thought I'd ask here. Cheers! Update: Running "gem list -d --version 1.0.1 rdoc" returns: *** LOCAL GEMS *** rdoc (2.4.3) Authors: Eric Hodel, Dave Thomas, Phil Hagelberg, Tony Strauss Rubyforge: http://rubyforge.org/projects/rdoc Homepage: http://rdoc.rubyforge.org Installed at: /usr/local/lib/ruby/gems/1.8 RDoc is an application that produces documentation for one or more Ruby source files Therefore, it's definitely the Mac OS X version of RDoc that's interfering with the Gems version. Update 2: I found out, using `bash --debugger rdoc`, that the old version of RDoc was in /opt/local/bin. I deleted it and added my gems directory to my $PATH: `export PATH=/usr/local/lib/ruby/gems/1.8/gems/` I now have a fresh working copy of the latest RDoc!


  • Serializing a DataType="time" field using XmlSerializer

    - by CraftyFella
    Hi, I'm getting an odd result when serializing a DateTime field using XmlSerializer. I have the following class: public class RecordExample { [XmlElement("TheTime", DataType = "time")] public DateTime TheTime { get; set; } [XmlElement("TheDate", DataType = "date")] public DateTime TheDate { get; set; } public static bool Serialize(Stream stream, object obj, Type objType, Encoding encoding) { try { using (var writer = XmlWriter.Create(stream, new XmlWriterSettings { Encoding = encoding })) { var xmlSerializer = new XmlSerializer(objType); if (writer != null) xmlSerializer.Serialize(writer, obj); } return true; } catch (Exception) { return false; } } } When I use the XmlSerializer with the following test code: var obj = new RecordExample {TheDate = DateTime.Now.Date, TheTime = new DateTime(0001, 1, 1, 12, 00, 00)}; var ms = new MemoryStream(); RecordExample.Serialize(ms, obj, typeof (RecordExample), Encoding.UTF8); txtSource2.Text = Encoding.UTF8.GetString(ms.ToArray()); I get some strange results; here's the XML that is produced: <?xml version="1.0" encoding="utf-8"?> <RecordExample xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <TheTime>12:00:00.0000000+00:00</TheTime> <TheDate>2010-03-08</TheDate> </RecordExample> Any ideas how I can get the "TheTime" element to contain a time which looks more like this: <TheTime>12:00:00.0Z</TheTime> ...as that's what I was expecting? Thanks Dave
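
    A common workaround (a sketch, not from the original post) is to hide the real DateTime from the serializer and expose a string property formatted exactly the way the consumer expects; the property name below is hypothetical, and CultureInfo needs using System.Globalization:

        [XmlIgnore]
        public DateTime TheTime { get; set; }

        [XmlElement("TheTime")]
        public string TheTimeFormatted
        {
            // serializes as e.g. 12:00:00.0Z
            get { return TheTime.ToString("HH:mm:ss.f", CultureInfo.InvariantCulture) + "Z"; }
            set { TheTime = DateTime.ParseExact(value.TrimEnd('Z'), "HH:mm:ss.f", CultureInfo.InvariantCulture); }
        }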


  • How do I test database-related code with NUnit?

    - by Michael Haren
    I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test, so I searched around and found several articles from 2004-05 on the topic: http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx http://haacked.com/archive/2005/12/28/11377.aspx These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to roll back DB operations after each test executes. That's great, but... Does this functionality exist somewhere in NUnit natively? Has this technique been improved upon in the last 4 years? Is this still the best way to test database-related code? Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one. Further, I want to ease this into an existing project that has no testing in place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
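
    One way the rollback technique has since been simplified (a sketch, under the assumption that your code uses ordinary ADO.NET connections, which auto-enlist in ambient transactions): System.Transactions lets a plain SetUp/TearDown pair do what those custom attributes did, rolling back because Complete() is never called:

        using System.Transactions;
        using NUnit.Framework;

        [TestFixture]
        public class DatabaseTests
        {
            private TransactionScope _scope;

            [SetUp]
            public void SetUp()
            {
                _scope = new TransactionScope(); // all DB work in the test enlists here
            }

            [TearDown]
            public void TearDown()
            {
                _scope.Dispose(); // Complete() was never called, so everything rolls back
            }

            [Test]
            public void InsertingAnOrderAddsARow()
            {
                // exercise the database-touching code here
            }
        }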


  • Are Fortran control characters (carriage control) still implemented in compilers?

    - by CmdrGuard
    In the book Fortran 95/2003 for Scientists and Engineers, there is much talk given to the importance of recognizing that the first column in a format statement is reserved for control characters. I've also seen control characters referred to as carriage control on the internet. To avoid confusion, by control characters, I refer to the characters "1, a blank (i.e. \s), 0, and +" as having an effect on the vertical spacing of output when placed in the first column (character) of a FORMAT statement. Also, see this text-only web page written entirely in fixed-width typeface: Fortran carriage-control (because nothing screams accuracy and antiquity better than prose in monospaced font). I found this page and others like it to be not quite clear. According to Fortran 95/2003 for Scientists and Engineers, failure to recall that the first column is reserved for carriage control can lead to horrible unintended output. Paraphrasing Dave Barry, type the wrong character, and nuclear missiles get fired at Norway. However, when I attempt to adhere to this stern warning, I find that gfortran has no idea what I'm talking about. Allow me to illustrate my point with some example code. I am trying to print out the number Pi: PROGRAM test_format IMPLICIT NONE REAL :: PI = 2 * ACOS(0.0) WRITE (*, 100) PI WRITE (*, 200) PI WRITE (*, 300) PI 100 FORMAT ('1', "New page: ", F11.9) 200 FORMAT (' ', "Single Space: ", F11.9) 300 FORMAT ('0', "Double Space: ", F11.9) END PROGRAM test_format This is the output: 1New page: 3.141592741 Single Space: 3.141592741 0Double Space: 3.141592741 The "1" and "0" are not typos. It appears that gfortran is completely ignoring the control character column. My question, then, is this: are control characters still implemented in standards-compliant compilers, or is gfortran simply not standards-compliant? For clarity, here is the output of my gfortran -v: Using built-in specs. Target: powerpc-apple-darwin9 Configured with: ../gcc-4.4.0/configure --prefix=/sw --prefix=/sw/lib/gcc4.4 --mandir=/sw/share/man --infodir=/sw/share/info --enable-languages=c,c++,fortran,objc,java --with-gmp=/sw --with-libiconv-prefix=/sw --with-ppl=/sw --with-cloog=/sw --with-system-zlib --x-includes=/usr/X11R6/include --x-libraries=/usr/X11R6/lib --disable-libjava-multilib --build=powerpc-apple-darwin9 --host=powerpc-apple-darwin9 --target=powerpc-apple-darwin9 Thread model: posix gcc version 4.4.0 (GCC)


  • C# ASP.NET Binding Controls via Generic Method

    - by OverTech
    I have a few web applications that I maintain, and I find myself very often writing the same block of code over and over again to bind a GridView to a data source. I'm trying to create a generic method to handle data binding, but I'm having trouble getting it to work with Repeaters and DataLists. Here is the generic method I have so far: public void BindControl<T>(T control, SqlCommand sql) where T : System.Web.UI.WebControls.BaseDataBoundControl { cmd = sql; cn.Open(); SqlDataReader dr = cmd.ExecuteReader(); if (dr.HasRows) { control.DataSource = dr; control.DataBind(); } dr.Close(); cn.Close(); } That way I can just define my CommandText and then make a call to BindControl(myGridView, cmd) instead of retyping this same basic block of code every time I need to bind a grid. The problem is, this doesn't work with Repeaters or DataLists. Each of these controls inherits its "DataSource" and "DataBind" members from a different class. Someone on another forum suggested that I implement an interface, but I'm not sure how to make that work either. The GridView, DataList and Repeater get their respective "DataBind" methods from the BaseDataBoundControl, BaseDataList, and Repeater classes. How would I go about creating a single interface to tie them all together? Or am I better off just using 3 overloads for this method? Dave
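
    Since the three controls don't share a base class exposing both DataSource and DataBind, one workable pattern (a sketch that reuses the cmd/cn fields from the code above) is a private core method plus three thin overloads; the control-specific step collapses into a delegate:

        private void BindCore(SqlCommand sql, Action<SqlDataReader> bind)
        {
            cmd = sql;
            cn.Open();
            SqlDataReader dr = cmd.ExecuteReader();
            if (dr.HasRows)
            {
                bind(dr); // the only part that differs per control type
            }
            dr.Close();
            cn.Close();
        }

        public void BindControl(GridView control, SqlCommand sql)
        {
            BindCore(sql, delegate(SqlDataReader dr) { control.DataSource = dr; control.DataBind(); });
        }

        public void BindControl(Repeater control, SqlCommand sql)
        {
            BindCore(sql, delegate(SqlDataReader dr) { control.DataSource = dr; control.DataBind(); });
        }

        public void BindControl(DataList control, SqlCommand sql)
        {
            BindCore(sql, delegate(SqlDataReader dr) { control.DataSource = dr; control.DataBind(); });
        }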


  • WordPress Get 1 attachment per post from X category outside the loop

    - by davebowker
    Hey, hope someone can help with this. What I want is to display 5 attachments, but only 1 attachment from each post, from a specific category, in the sidebar, each linking to its post's permalink. I'm using the following code so far, which gets all attachments from all posts, but some posts have more than 1 attachment and I just want to show the first one, linked to the permalink of the post. So although the limit is 5 posts, if one post has 4 attachments then currently it will show 4 from one and 1 from another, totalling 5, when what I want is to show 1 from each of 5 different posts. <?php $args = array( 'post_type' => 'attachment', 'numberposts' => 5, 'post_status' => null, 'post_parent' => null, // any parent 'category_name' => 'work', ); $attachments = get_posts($args); if ($attachments) { foreach ($attachments as $post) { setup_postdata($post); the_title(); the_permalink(); the_attachment_link($post->ID, false); the_excerpt(); } } ?> Cheers. Dave
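
    One way to guarantee one attachment per post (a sketch, assuming the 'work' category query behaves for your posts) is to flip the query around: fetch the five posts first, then pull a single attachment for each with get_children():

        <?php
        $work_posts = get_posts(array(
            'numberposts'   => 5,
            'category_name' => 'work',
        ));
        foreach ($work_posts as $work_post) {
            $attachments = get_children(array(
                'post_parent' => $work_post->ID,
                'post_type'   => 'attachment',
                'numberposts' => 1, // only the first attachment of this post
            ));
            foreach ($attachments as $attachment) {
                echo '<a href="' . get_permalink($work_post->ID) . '">';
                echo wp_get_attachment_image($attachment->ID, 'thumbnail');
                echo '</a>';
            }
        }
        ?>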


  • Multiple page technique

    - by the Hampster
    Six years ago I wrote a large web application. It was my very first PHP program, and it grew, and grew. I used a lot of forms, and some very (atrocious) techniques to give an AJAX "look" to the site, and to handle multiple users with various permissions. I am looking to expand this, and sell it to other companies in the field, but I need to seriously modernize it. Currently, it loads index.php, where you log in. The user can then select from various "students" or "participants" (we work with people with disabilities). From there, you can select different lesson plans or life skills that are being worked on. There are many other branches as well. At each stage, a different page is served: Sel_Participant.php, Sel_Program.php, etc. Are there any fundamental problems with doing it this way? If so, what are some good techniques for eliminating this, so that people only see the TLD in their address bar? Thanks in advance. --Dave
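
    A minimal front-controller sketch (the file names are reused from above; the rest is hypothetical): route every request through index.php and include the real page from a whitelist, so the address bar never changes:

        <?php
        // index.php -- a whitelist keeps ?page=... from including arbitrary files
        $pages = array(
            'participant' => 'Sel_Participant.php',
            'program'     => 'Sel_Program.php',
        );
        $page = isset($_GET['page']) ? $_GET['page'] : 'participant';
        if (!isset($pages[$page])) {
            $page = 'participant'; // unknown value: fall back to a safe default
        }
        require $pages[$page];
        ?>

    Forms then POST back to index.php (or you fetch fragments via AJAX), and visitors only ever see the domain.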


  • Byte-Pairing for data compression

    - by user1669533
    Question about byte-pairing for data compression. If byte pairing converts two byte values to a single byte value, halving the file, then applying it recursively four times (a 16:1 reduction) shrinks a gigabyte file to 62,500,000 bytes. My question is, is byte-pairing really efficient? Is the creation of a 5,000,000-iteration loop, to be conservative, efficient? I would like some feedback and some incisive opinions, please. Dave, what I read was: "The US patent office no longer grants patents on perpetual motion machines, but has recently granted at least two patents on a mathematically impossible process: compression of truly random data." I was not inferring that the Patent Office was actually considering what I am inquiring about. I was merely commenting on the notion of a "mathematically impossible process." If someone has, in some way, created a method of having a "single" data byte as a placeholder for 8 individual bytes of data, that would be a consideration for a patent. Now, about the mathematical impossibility of an 8-to-1 compression method: it is not so much a mathematical impossibility as a series of rules and conditions that can be created. As long as there is the rule of 8- or 16-bit representation of data stored on a medium, there are ways to manipulate data that mirror current methods, or to create new ones by a new way of thinking.


  • table subtraction challenge

    - by Valentin
    I have a challenge that I haven’t overcome in the last two days using stored procedures and SQL 2008. I took several approaches, but all fell short. One very interesting approach was using table subtraction. It’s really all about table subtraction. I was wondering if you could help me crack this one. Here is the challenge: two tables, 1Testdb and 2Testdb. My first step was to select ID relationships ([2Testdb].Acc_id) in table 2Testdb for one given individual ([2Testdb].Bus_id), then query table 1Testdb for records not matching my original selection from 2Testdb. But other approaches are welcome. Data and structures: USE [Challengedb] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[1Testdb]( [Acc_id] [uniqueidentifier] NULL, [Name] [varchar](10) NULL ) ON [PRIMARY] GO CREATE TABLE [dbo].[2Testdb]( [Acc_id] [uniqueidentifier] NULL, [Bus_id] [uniqueidentifier] NULL ) ON [PRIMARY] GO Records in 1Testdb: 34455F60-9474-4521-804E-66DB39A579F3, John C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete DC711615-3BE4-4B31-9EF2-B1314185CA62, Dave E3AAB073-2398-476D-828B-92829F686A4C, Adam Records in 2Testdb (relationship table, e.g. friend relationships): Record #1: DC711615-3BE4-4B31-9EF2-B1314185CA62, 34455F60-9474-4521-804E-66DB39A579F3 Record #2: E3AAB073-2398-476D-828B-92829F686A4C, 34455F60-9474-4521-804E-66DB39A579F3 Record #3: DC711615-3BE4-4B31-9EF2-B1314185CA62, E3AAB073-2398-476D-828B-92829F686A4C Record #4: E3AAB073-2398-476D-828B-92829F686A4C, DC711615-3BE4-4B31-9EF2-B1314185CA62 Challenge: select from table 1Testdb only those distinct records that do not have a relationship with John [34455F60-9474-4521-804E-66DB39A579F3] in table 2Testdb. The expected result should be (who does John not have a relationship with?): C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete Thank you, Valentin
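
    A sketch of one NOT EXISTS approach (with an assumption: a relationship with John is any 2Testdb row whose Bus_id is John and whose Acc_id is the candidate; if relationships can point either way, check both column orders):

        DECLARE @john uniqueidentifier;
        SET @john = '34455F60-9474-4521-804E-66DB39A579F3';

        SELECT DISTINCT t1.Acc_id, t1.Name
        FROM [dbo].[1Testdb] AS t1
        WHERE t1.Acc_id <> @john              -- John cannot relate to himself
          AND NOT EXISTS (SELECT 1
                          FROM [dbo].[2Testdb] AS t2
                          WHERE t2.Bus_id = @john
                            AND t2.Acc_id = t1.Acc_id);

        -- With the sample data above this returns only Pete.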


  • Event not triggering

    - by the Hampster
    I have no idea where to start on this one. I have a <div> that does not appear until a button is clicked. This function call works: onclick="highlight('mod_sup_div', true);" function highlight(aDiv,show) { if (show) { Effect.Appear('Overlay',{duration: 0.5, to: .80}); Effect.Appear(aDiv,{duration: 0.5}) } else { Effect.Fade('Overlay',{duration: 0.5, to: .80}); Effect.Fade(aDiv,{duration: 0.5}) } } In the <div> I have a button to close the window: <p class="closer"><span onclick="highlight('mod_sup_div',false)">X</span></p> This does not work. The function is not even called: I made an alert() the first line of the function and it never fires. What is odd is that onclick="Effect.Fade(aDiv,{duration: 0.5})" does work. Other simple JavaScript functions in the onclick="" work, except for the function call. Any help as to why this is happening would be very appreciated. Thanks, Dave


  • Presenting collection of structures of strings in grid or similar in WPF - Example? Ideas?

    - by Andrew
    Hi, I have a collection of structures. The structure is just some strings. Example: public struct ReportLine { public string Name; public string Address; public string Phone; ... // about 10 other strings } I can't change this part. What I want to do is display it in a simple grid or similar in WPF. My only requirements are: a) need column headers b) rows must alternate in color c) columns big enough to hold the largest datum (which is not known until run time) Can someone point me to an example to get me started? Is the GridView the way to go? Or DataGrid? Or perhaps just the Grid? I have the book Pro WPF in C# 2008 and it covers binding ListBoxes to collections, but the collections always seem to be collections of one field (e.g. a collection of 40 names). Here I have a collection (an array, in fact) of a structure. How do I set up the databinding? As you can see, I'm new to this and would probably benefit most from a reference to an article. I've found articles covering collections of 1 field, but no examples covering binding to an array of structures. Also, my initial research indicates c) can't be easily done, if you demand that the column is big enough for all the data in that column, not just the visible data. Thanks, dave
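
    One caveat worth knowing before any example: WPF bindings resolve against public properties, not public fields, so the struct as written won't bind; the usual workaround is a thin wrapper class that exposes the same data as properties. With that assumption in place, a hedged XAML sketch of a ListView/GridView covering requirements a) and b) (the "Lines" collection name is hypothetical) would be:

        <ListView ItemsSource="{Binding Lines}" AlternationCount="2">
          <ListView.ItemContainerStyle>
            <Style TargetType="ListViewItem">
              <Style.Triggers>
                <!-- every second row gets a different background -->
                <Trigger Property="ItemsControl.AlternationIndex" Value="1">
                  <Setter Property="Background" Value="WhiteSmoke" />
                </Trigger>
              </Style.Triggers>
            </Style>
          </ListView.ItemContainerStyle>
          <ListView.View>
            <GridView>
              <!-- no Width set, so each column sizes to its content -->
              <GridViewColumn Header="Name" DisplayMemberBinding="{Binding Name}" />
              <GridViewColumn Header="Address" DisplayMemberBinding="{Binding Address}" />
              <GridViewColumn Header="Phone" DisplayMemberBinding="{Binding Phone}" />
            </GridView>
          </ListView.View>
        </ListView>

    For c), auto-width only measures rows that have actually been generated, which matches the caveat in your research; turning off virtualization (at a performance cost) is one blunt way around it.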


  • Changing type of object in a conditional

    - by David Doria
    I'm having a bit of trouble with dynamic_cast. I need to determine at runtime the type of an object. Here is a demo: #include <string> class PersonClass { public: std::string Name; virtual void test(){}; //it is annoying that this has to be here... }; class LawyerClass : public PersonClass { public: void GoToCourt(){}; }; class DoctorClass : public PersonClass { public: void GoToSurgery(){}; }; int main(int argc, char *argv[]) { PersonClass* person = new PersonClass; if(true) { person = dynamic_cast<LawyerClass*>(person); } else { person = dynamic_cast<DoctorClass*>(person); } person->GoToCourt(); return 0; } I would like to do the above. The only legal way I found to do it is to define all of the objects beforehand: PersonClass* person = new PersonClass; LawyerClass* lawyer; DoctorClass* doctor; if(true) { lawyer = dynamic_cast<LawyerClass*>(person); } else { doctor = dynamic_cast<DoctorClass*>(person); } if(true) { lawyer->GoToCourt(); } The main problem with this (besides having to define a bunch of objects that won't be used) is that I have to change the name of the 'person' variable. Is there a better way? (I am not allowed to change any of the classes (Person, Lawyer, or Doctor) because they are part of a library that people who will use my code have and won't want to change.) Thanks, Dave
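
    One idiom that avoids the pre-declared locals (a sketch relying only on standard C++ and the classes above): declare the casted pointer inside the if condition, so it exists exactly where it is valid and non-null:

        if (LawyerClass* lawyer = dynamic_cast<LawyerClass*>(person))
        {
            lawyer->GoToCourt();   // only runs when person really is a lawyer
        }
        else if (DoctorClass* doctor = dynamic_cast<DoctorClass*>(person))
        {
            doctor->GoToSurgery();
        }

    Note that dynamic_cast returns a null pointer when the object isn't of the target type: in the demo, person points at a plain PersonClass, so both casts would yield null.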


  • Create Extension Method to Produce Open & Closing Tags like Html.BeginForm()

    - by DaveDev
    Hi guys, I wonder if it's possible to create an extension method which has functionality & behaviour similar to Html.BeginForm(), in that it would generate a complete HTML tag, and I could specify its contents inside <% { %> and <% } %> blocks. For example, I could have a view like: <% using(Html.BeginDiv("divId")) %> <% { %> <!-- Form content goes here --> <% } %> This capability would be very useful in the context of the functionality I'm trying to produce with the example in this question. This would give me the ability to create containers for the types that I'll be working with: <% var myType = new MyType(123, 234); %> <% var tag = new TagBuilder("div"); %> <% using(Html.BeginDiv<MyType>(myType, tag)) %> <% { %> <!-- controls used for the configuration of MyType --> <!-- represented in the context of an HTML element, e.g.: --> <div class="MyType" prop1="123" prop2="234"> <!-- add a select here --> <!-- add a radio control here --> <!-- whatever, it represents elements in the context of their type --> </div> <% } %> I realise this will produce invalid XHTML, but I think there could be other benefits that outweigh this, especially since this project doesn't require that the XHTML validate to the W3C standards. Thanks Dave
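
    The BeginForm trick is just an IDisposable whose Dispose() writes the closing tag, so a minimal sketch is straightforward (assuming ASP.NET MVC 2, where HtmlHelper exposes ViewContext.Writer; the helper names are hypothetical):

        public static class DivExtensions
        {
            public static IDisposable BeginDiv(this HtmlHelper html, string id)
            {
                TextWriter writer = html.ViewContext.Writer;
                writer.Write("<div id=\"" + html.AttributeEncode(id) + "\">");
                return new TagCloser(writer, "</div>"); // written when the using block ends
            }

            private sealed class TagCloser : IDisposable
            {
                private readonly TextWriter writer;
                private readonly string closing;

                public TagCloser(TextWriter writer, string closing)
                {
                    this.writer = writer;
                    this.closing = closing;
                }

                public void Dispose()
                {
                    writer.Write(closing);
                }
            }
        }

    A generic BeginDiv<T>(T model, TagBuilder tag) overload could build the opening tag's attributes from the model the same way before returning the TagCloser.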


  • How do you send an extended-ascii AT-command (CCh) from Android bluetooth to a serial device?

    - by softex
    This one really has me banging my head. I'm sending alphanumeric data from an Android app, through the BluetoothChatService, to a serial Bluetooth adaptor connected to the serial input of a radio transceiver. Everything works fine except when I try to configure the radio on the fly with its AT commands. The AT+++ (enter command mode) is received OK, but the problem comes with the extended-ASCII characters in the next two commands: changing the radio destination address (which is what I'm trying to do) requires CCh 10h (plus 3 hex radio address bytes), and exiting command mode requires CCh ATO. I know the radio can be configured OK because I've done it on an earlier prototype with the serial commands from PIC Basic, and it can also be configured by entering the commands directly from HyperTerminal. Both these methods somehow convert that pesky CCh into a form the radio understands. I have tried just about everything an Android noob could possibly come up with to finagle the encoding, such as: private void command_address() { byte[] addrArray = {(byte) 0xCC, 16, 36, 65, 21, 13}; CharSequence addrvalues = EncodingUtils.getString(addrArray, "UTF-8"); sendMessage((String) addrvalues); } but no matter what, I can't seem to get that high-order byte (CCh/204/-52) to behave as it should. All other (< 127) bytes, command or data, transmit with no problem. Any help here would be greatly appreciated. -Dave
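
    The String round trip is the likely culprit: 0xCC is not a valid single-byte UTF-8 sequence, so decoding it with EncodingUtils.getString() mangles it before it ever reaches the radio. A hedged sketch of the usual fix is to skip Strings entirely and hand the raw bytes to the service (the stock BluetoothChat sample's write() already takes a byte[]; the helper below is hypothetical):

        // Send CCh 10h plus the address bytes untouched by any charset.
        private void commandAddress() {
            byte[] cmd = { (byte) 0xCC, 0x10, 0x24, 0x41, 0x15, 0x0D };
            mChatService.write(cmd); // no String/charset conversion in the path
        }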


  • Sqlite3 INSERT INTO Question

    - by user316717
    Hi, my 1st post. I am creating an exercise app that will record the weight used and the number of "reps" the user did in 4 "sets" per day over a period of 7 days, so the user may view their progress. I have built the database table named FIELDS with 2 columns, ROW and FIELD_DATA, and I can use the code below to load the data into the db. But the code has a SQL statement that says: INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@'); When I change the statement to: INSERT INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@'); nothing happens. That is, no data is recorded in the db. Below is the code: #define kFilname @"StData.sqlite3" - (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:kFilname]; } -(IBAction)saveData:(id)sender; { for (int i = 1; i <= 8; i++) { NSString *fieldName = [[NSString alloc]initWithFormat:@"field%d", i]; UITextField *field = [self valueForKey:fieldName]; [fieldName release]; NSString *insert = [[NSString alloc] initWithFormat: @"INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');",i, field.text]; // sqlite3_stmt *stmt; char *errorMsg; if (sqlite3_exec (database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) { // NSAssert1(0, @"Error updating table: %s", errorMsg); sqlite3_free(errorMsg); } } sqlite3_close(database); } So how do I modify the code to do what I want? It seemed like a simple SQL statement change at first, but obviously there must be more. I am new to Objective-C and iPhone programming. I am not new to using SQL statements, as I have been creating web apps in ASP for a number of years. Any help will be greatly appreciated; this is driving me nuts! Thanks in advance, Dave
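
    One thing worth checking first (a sketch, not a confirmed diagnosis): the NSAssert that would have reported the failure is commented out, so a plain INSERT that violates a constraint (for example a primary key on ROW, which INSERT OR REPLACE would silently satisfy) fails without a trace. Logging the error instead will say exactly why nothing is inserted:

        char *errorMsg;
        if (sqlite3_exec(database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
            NSLog(@"INSERT failed: %s", errorMsg); // e.g. "PRIMARY KEY must be unique"
            sqlite3_free(errorMsg);
        }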


  • How to deny the web access to some files?

    - by Strae
    I need to do an operation that's a bit strange. First, I run on Debian, Apache 2 (which runs as user www-data). So, I have simple text files with a .txt or .ini or whatever extension; it doesn't matter. These files are located in subfolders with a structure like this: www.example.com/folder1/car/foobar.txt www.example.com/folder1/cycle/foobar.txt www.example.com/folder1/fish/foobar.txt www.example.com/folder1/fruit/foobar.txt The file name is always the same, ditto for the 'hierarchy'; just the name of the folder changes: /folder-name-static/folder-name-dynamic/file-name-static.txt What I need to do is (I think) relatively simple: I must be able to read those files from programs on the server (Python and PHP, for example), but if I try to retrieve the file contents with a browser (entering the URL www.example.com/folder1/car/foobar.txt, or via cURL, etc.) I must get a forbidden error, or whatever, but not access the file. It would also be nice if those files were 'hidden' even when accessed via FTP, or at least couldn't be downloaded (at least with the FTP root and user credentials I hand out). How can I do this? I found this online, to be put in the .htaccess file: <Files File.txt> Order allow,deny Deny from all </Files> It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), and not in subfolders. Moreover, the folders at the second level (www.example.com/folder1/fruit/foobar.txt) will be dynamically created. I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, that applies to all files with a given name located at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and just that one part changes? EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too (images and stuff used by my users), so I'm simply trying not to have a duplicate folder system, like: /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here] // and then, for the 'secret' files: /folder1/data/car/foobar.txt /folder1/data/cycle/foobar.txt /folder1/data/fish/foobar.txt
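
    For the subfolder problem, one hedged sketch (Apache 2.2 syntax, matching the Order/Deny style above): directives in an .htaccess apply to that directory and everything below it, so a single <Files> section in /folder1/.htaccess should cover car/, cycle/, fish/, and any folder created later, with no per-folder edits:

        # /folder1/.htaccess -- applies to this directory and all subdirectories
        <Files "foobar.txt">
            Order allow,deny
            Deny from all
        </Files>

    PHP and Python scripts read the files through the filesystem, not through Apache, so they are unaffected; the FTP side has to be handled separately with filesystem permissions.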


  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not display images over about 2K to a browser. Small images seem to display fine. Static HTML and PHP continue to work fine as well. Installed: apache2 2.2.14-5ubuntu8.4 apache2-mpm-prefork 2.2.14-5ubuntu8.4 apache2-utils 2.2.14-5ubuntu8.4 apache2.2-bin 2.2.14-5ubuntu8.4 apache2.2-common 2.2.14-5ubuntu8.4 Here is an ngrep of an image that doesn't display in the browser: T 192.168.0.4:32907 - 192.168.0.54:80 [AP] GET /path/path/logo.png HTTP/1.1..Host: 192.1 68.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+ xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Ag ent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Enco ding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept- Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3.... T 192.168.0.54:80 - 192.168.0.4:32907 [A] HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apa che/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT ..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Conten t-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep -Alive..Content-Type: image/png.....PNG........IHDR...!...v...... .%.....sRGB.........bKGD..............pHYs.................tIME.. etc... This looks OK to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt; it also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent from Apache. Also, when I look at the saved image file, it includes the headers from Apache, along with a bit of garbage at the top, like so: vi logo.png: ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M Date: Wed, 09 Mar 2011 04:47:04 GMT^M Server: Apache/2.2.14 (Ubuntu)^M Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M ETag: "17b6ff-157c-491dd63eb2f40"^M Accept-Ranges: bytes^M Content-Length: 5500^M Keep-Alive: timeout=15, max=94^M Connection: Keep-Alive^M Content-Type: image/png^M ^M PNG^M etc... Any ideas? It's driving me nuts. There is nothing in the Apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this Ubuntu box either. Thanks heaps!! Dave ps: just tried on IE from a different computer, same problem :( pps: rebooted server, no help.
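
    Not from the original post, but a common first check when Apache truncates or corrupts larger static files on Linux (small files fit in a single packet, which would explain the ~2K threshold): the sendfile kernel path interacts badly with some network drivers and virtual filesystems. Disabling it is a low-risk test, and the directive below is standard Apache core:

        # in apache2.conf (or the vhost), then restart Apache
        EnableSendfile Off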


  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL Transactional Replication with pull subscriptions to replicate databases (hosting their own distribution database) from several servers across a VPN to a central server. I've got the first 2 databases working fine, but the 3rd one is causing me problems. My subscription server is SQL 2008; the source systems are all SQL 2005. The source databases are a few 100 MB in size and contain audit data, so they are simply growing slowly by adding new records at approx 1 KB a second. As far as the replication monitor, agent logs and event logs show, everything is working fine, except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied, although they do seem to be arriving OK, as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous times) to see statements about data being BCPed in the replication monitor. The snapshot is on the publisher in a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alt snapshot folder is pointing at it. But the distribution agent doesn't seem to be making an attempt to read it. I tried changing the snapshot path to something incorrect and didn't even get an error saying that it couldn't access it. After lots of googling etc. I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler, I can see that it's only called for one agent ID. After a reinit it's called for sequence number 0x0 as expected, so I thought that meant it would look for the snapshot. However, looking on the publisher I see that there's data for two agents: the snapshot agent and the log reader agent (which is being queried). So I guess I need to tell the distribution agent to get the data for both. But how? And more importantly, why? It worked fine on the other two servers I've replicated. I'm not an SQL novice, but this is pretty much my first go at replication, so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (e.g. from the distribution agent) if you want, but they don't seem to have any errors in them; it just starts up and starts applying log reader agent changes. Cheers Dave


  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Book On Line: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and can always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how Snapshot database work, here is a quick note on the subject. However, please refer to the official description on Book-on-Line for accuracy. Snapshot database is a read-only database created from an original database called the “source database”. This database operates at page level. When Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to Snapshot database, making the sparse file size increases. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the Snapshot database. Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only- there are a few considerations of CPU, DISK I/O and memory, which will be discussed in the future posts. Create Snapshot Delete Data from Original DB Restore Data from Snapshot First, let us create the first Snapshot database and observe the sparse file details. USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO Now let us see the resultset for the same. Now let us do delete something from the Original DB and check the same details we checked before. -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO When we check the details of sparse file created by Snapshot database, we will find some interesting details. The details of Regular DB remain the same. It clearly shows that when we delete data from Regular/Source DB, it copies the data pages to Snapshot database. This is the reason why the size of the snapshot DB is increased. Now let us take this small exercise to  the next level and restore our deleted data from Snapshot DB to Original Source DB. 
-- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Now let us check the details of the select statement and we can see that we are successful able to restore the database from Snapshot Database. We can clearly see that this is a very useful feature in case you would encounter a good business that needs it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can be potentially used to achieve any tasks. Complete Script of the afore- mentioned operation for easy reference is as follows: USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology


  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series for another on-going blog series I’m working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources.  Lots of useful pointers. Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here. 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET Websites.  Highly recommended reading. Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article).  You can use this to significantly improve the load-time of your pages on the client. Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET.  A very useful link to bookmark.  Also check out James Michael’s DateTime is Packed with Goodies blog post for other DateTime tips. Examining ASP.NET’s Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to know about ASP.NET’s built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership. ASP.NET with jQuery An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project. Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them. Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated “news ticker” style UI with ASP.NET and jQuery. Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView’s header to automatically check/uncheck all checkboxes contained within rows of it. Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx). ASP.NET MVC ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps. ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3.  This makes it easy to post JSON payloads to MVC action methods. 
    Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications. Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate that credit card numbers are in the correct format with ASP.NET MVC. Silverlight Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter.  You can watch all of the talks online.  More details on my keynote and Silverlight 5 announcements can be found here. 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).  Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit. Visual Studio JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript intellisense support within Visual Studio. HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010. Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Vishal Joshi blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.  Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010. Hope this helps, Scott


  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer, always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has really written an excellent piece here, and comments are welcome. Enumerations in Relational Database This is a subject which is a very basic thing in relational databases, but often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I concentrate on only two cases: one which is “the right way” and one which is the definitely wrong way. The concept Let’s say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells the person’s marital status; let’s name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn’t have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do a simple SELECT query, and you don’t have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(20) ) You have an nvarchar(20) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn’t fit into 20 characters? You’ll have to come and alter the table. There are other problems also, but I’ll leave those for the reader to think about. The correct way Here’s how I’ve done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values, and which references the namespace table in order to group the values under different namespaces. I’ll add a couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there are four columns in the CodeValue table. 
    CodeNamespaceId tells which namespace the value belongs to. Value tells the enumeration value which is used in the Person table (I’ll show how this is done below). Description tells what the value means. You can use this column, for example, in a UI combo box. OrderBy tells if the values need to be ordered in some way when displayed in the UI. And here’s the Person table again, now with the correct columns. I’ll add one row here to show how enumerations are to be used. CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO Now I said earlier that there is one problem with this. The MaritalStatus column doesn’t have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I check the entered value in the data access layer as well. I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bound so you can even index it if you like. CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO This is an excellent write-up by Marko Parkkola. Do you have this kind of design set up at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
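
    If you want the database to enforce the relationship as well, here is a sketch extending Marko's schema (not part of his original): store the namespace id on Person too and add a composite foreign key against CodeValue's primary key:

        ALTER TABLE [dbo].[Person]
        ADD [MaritalStatusNamespaceId] INT NOT NULL
            CONSTRAINT [DF_Person_MSNamespace] DEFAULT (1); -- 1 = 'MaritalStatus'

        ALTER TABLE [dbo].[Person]
        ADD CONSTRAINT [FK_Person_CodeValue]
        FOREIGN KEY ([MaritalStatusNamespaceId], [MaritalStatus])
        REFERENCES [CodeValue] ([CodeNamespaceId], [Value]);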

