Search Results

Search found 5233 results on 210 pages for 'a records'.


  • How can this C and PHP programmer learn Ruby and Rails?

    - by Winston
    I came from a C, PHP and bash background; they were easy to learn because they all share the same C-like structure, which I could associate with what I already knew. Two years ago I learned Python, and I learned it quite well - Python was easier for me to pick up than Ruby. Since last year I've been trying to learn Ruby, then Rails, and I admit that I still don't get it. The irony is that both are branded as easy to learn, but for a seasoned programmer like me, they just don't map onto what I learned before. I have two books, on Ruby and on Rails, and when I read them nothing sinks in; I'm close to giving up...

    In Ruby, I'm having a hard time grasping the concept of blocks, why there are @variables that can be accessed by other functions, and what $variable and :variable do. And in Rails, why are there functions like this_is_another_function_that_does_this? Is that just a naming convention, or is it auto-generated? I'm still puzzled about where all these magic concepts and things come from. And now, a year of trying and absorbing, but still no progress...

    Edit: To summarize:

    1. How can I learn about blocks, and how can they be related to concepts from PHP/C?
    2. Variables - what does it mean when a variable is prefixed with @, $ or :?
    3. "Magic concepts", such as Rails declarations of records - what happens behind the scenes when I write has_one X?

    OK, so bear with me in my confusion - at least I'm honest with myself - it's over a year now since I first tried to learn Ruby, and I'm not getting any younger. This is how I learned things in bash:

        solve_problem($problem) {
          if [ -e $problem == "trivial" ]; then
            write_solution
          else
            breakdown_problem_into_N_subproblems
            define_relationship_between_subproblems
            for i in $( command $each_subproblem ); do
              solve_problem $i
            done
          fi
        }

        write_solution(problem) {
          some_solution=$(command <parameters> "input" | command)
          command | command $some_solution > output_solved_problem_to_file
        }

        breakdown_problem_into_N_subproblems($problems) {
          for i in $problems; do
            command $i | command > i_can_output_a_file_right_away
          done
        }

        define_relationship_between_subproblems($problems) {
          if [ -e $problem == "relationship" ]; then
            relationship=$(command; command | command; command;)
          elsif [ -e $problem == "another_relationship" ];
            relationship=$(command; command | command; command;)
          fi
        }

    In C/PHP it's something like this:

        solve_problem(problem) {
          if (problem == trivial)
            write_solution;
          else {
            breakdown_problem_into_N_subproblems;
            define_relationship_between_subproblems;
            for (each_subproblem)
              solve_problems(subproblem);
          }
        }

    And now I just can't connect the dots with Ruby: |b| { blocks }, @variables, :variables, and variables_with_this_naming.
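
    Since the asker already knows C, PHP and some Python, a small Ruby sketch of the three constructs the question circles - blocks and the @/$/: prefixes - may anchor them (illustrative names only, not from the post). The long underscored names are just Ruby naming convention, and a declaration like has_one :profile is an ordinary class-method call that defines the association methods when the class loads:

        # A block is a chunk of code handed to a method - the closest C/PHP
        # analogue is a callback / function pointer. |n| names its parameter.
        [1, 2, 3].each { |n| puts n }

        def solve_problem(problem)
          yield(problem)              # run the block the caller passed in
        end
        solve_problem("trivial") { |p| puts "solving #{p}" }

        $global = "a $variable is a global variable"

        class Record
          def initialize
            @name = "an @variable is an instance variable" # shared by this object's methods
          end

          def show
            puts @name
            puts $global
            puts :a_symbol            # a :symbol is an immutable interned name,
          end                         # used where PHP might use a string constant
        end
        Record.new.show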

  • Which credentials should I put in for the Google App Engine BulkLoader on the development server?

    - by Hoang Pham
    Hello everyone, I would like to ask which kind of credentials I need to put in when importing data using the Google App Engine BulkLoader class:

        appcfg.py upload_data --config_file=models.py --filename=listcountries.csv --kind=CMSCountry --url=http://localhost:8178/remote_api vit/

    It then asks me for credentials:

        Please enter login credentials for localhost

    Here is an extract of the content of models.py; I use this listcountries.csv file:

        class CMSCountry(db.Model):
          sortorder = db.StringProperty()
          name = db.StringProperty(required=True)
          formalname = db.StringProperty()
          type = db.StringProperty()
          subtype = db.StringProperty()
          sovereignt = db.StringProperty()
          capital = db.StringProperty()
          currencycode = db.StringProperty()
          currencyname = db.StringProperty()
          telephonecode = db.StringProperty()
          lettercode = db.StringProperty()
          lettercode2 = db.StringProperty()
          number = db.StringProperty()
          countrycode = db.StringProperty()

        class CMSCountryLoader(bulkloader.Loader):
          def __init__(self):
            bulkloader.Loader.__init__(self, 'CMSCountry',
                                       [('sortorder', str),
                                        ('name', str),
                                        ('formalname', str),
                                        ('type', str),
                                        ('subtype', str),
                                        ('sovereignt', str),
                                        ('capital', str),
                                        ('currencycode', str),
                                        ('currencyname', str),
                                        ('telephonecode', str),
                                        ('lettercode', str),
                                        ('lettercode2', str),
                                        ('number', str),
                                        ('countrycode', str)
                                       ])

        loaders = [CMSCountryLoader]

    Every try to enter the email and password results in "Authentication Failed", so I could not import the data into the development server. I don't think there is any problem with my files or my models, because I have successfully uploaded the data to the appspot.com application. So what should I put in for the localhost credentials? I also tried to use Eclipse with Pydev, but I still got the same message :( Here is the output:

        Uploading data records.
        [INFO ] Logging to bulkloader-log-20090820.121659
        [INFO ] Opening database: bulkloader-progress-20090820.121659.sql3
        [INFO ] [Thread-1] WorkerThread: started
        [INFO ] [Thread-2] WorkerThread: started
        [INFO ] [Thread-3] WorkerThread: started
        [INFO ] [Thread-4] WorkerThread: started
        [INFO ] [Thread-5] WorkerThread: started
        [INFO ] [Thread-6] WorkerThread: started
        [INFO ] [Thread-7] WorkerThread: started
        [INFO ] [Thread-8] WorkerThread: started
        [INFO ] [Thread-9] WorkerThread: started
        [INFO ] [Thread-10] WorkerThread: started
        Password for [email protected]:
        [DEBUG ] Configuring remote_api. url_path = /remote_api, servername = localhost:8178
        [DEBUG ] Bulkloader using app_id: abc
        [INFO ] Connecting to /remote_api
        [ERROR ] Exception during authentication
        Traceback (most recent call last):
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 2802, in Run
            request_manager.Authenticate()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 1126, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 488, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send
            f = self.opener.open(req)
          File "C:\Python25\lib\urllib2.py", line 381, in open
            response = self._open(req, data)
          File "C:\Python25\lib\urllib2.py", line 399, in _open
            '_open', req)
          File "C:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "C:\Python25\lib\urllib2.py", line 1107, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "C:\Python25\lib\urllib2.py", line 1082, in do_open
            raise URLError(err)
        URLError: <urlopen error (10061, 'Connection refused')>
        [INFO ] Authentication Failed

    Thank you!
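
    Two observations, hedged (my reading, not from the post): the final URLError (10061, 'Connection refused') means nothing was listening on localhost:8178 at all, so the "Authentication Failed" is a symptom, not the cause - check that the dev server is actually running on that port. And as I recall the 1.x SDK, the development server does not validate credentials: any email plus an empty password is accepted. A sketch:

        # start the dev server on the port the bulkloader will target
        dev_appserver.py --port=8178 vit/

        # then upload; when prompted for a password, just press Enter
        appcfg.py upload_data --config_file=models.py --filename=listcountries.csv \
            --kind=CMSCountry --url=http://localhost:8178/remote_api \
            --email=anything@example.com vit/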

  • stdio's remove() not always deleting on time.

    - by Kyte
    For a particular piece of homework, I'm implementing a basic data storage system using sequential files under standard C, which cannot load more than 1 record at a time. So, the basic part is creating a new file where the results of whatever we do with the original records are stored. The previous file is renamed, and a new one under the working name is created. The code is compiled with MinGW 5.1.6 on Windows 7. Problem is, this particular version of the code (I've got nearly-identical versions of this floating around my functions) doesn't always remove the old file, so the rename fails and hence the stored data gets wiped by the fopen():

        FILE *archivo, *antiguo;
        remove("IndiceNecesidades.old");   // This randomly fails to work in time.
        rename("IndiceNecesidades.dat", "IndiceNecesidades.old");   // So rename() fails.
        antiguo = fopen("IndiceNecesidades.old", "rb");
          // But apparently it still gets deleted, since this turns out null (and I never
          // find the .old in my working folder after the program's done).
        archivo = fopen("IndiceNecesidades.dat", "wb");   // And here the data gets wiped.

    Basically, any time the .old previously exists, there's a chance it's not removed in time for the rename() to take effect successfully. There are no possible name conflicts, either internally or externally. The weird thing is that it's only with this particular file. Identical snippets, except with the name changed to Necesidades.dat (which happen in 3 different functions), work perfectly fine:

        // I'm yet to see this snippet fail.
        FILE *antiguo, *archivo;
        remove("Necesidades.old");
        rename("Necesidades.dat", "Necesidades.old");
        antiguo = fopen("Necesidades.old", "rb");
        archivo = fopen("Necesidades.dat", "wb");

    Any ideas on why this would happen, and/or how can I ensure the remove() command has taken effect by the time rename() is executed? (I thought of just using a while loop to force-call remove() again so long as fopen() returns a non-null pointer, but that sounds like begging for a crash due to overflowing the OS with delete requests or something.)
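
    A hedged sketch of a more defensive version (the helper below is illustrative, not from the post). The key diagnostic is that neither remove() nor rename() has its return value checked; on Windows, remove() fails with a permission error while any FILE* still has the file open, so a missing fclose() on an earlier pass would produce exactly this intermittent behaviour, and a bounded retry with perror() makes the cause visible:

        #include <errno.h>
        #include <stdio.h>

        /* Illustrative helper: retry the delete a bounded number of times
           and report why it fails instead of assuming it worked. */
        static int remove_with_retry(const char *path, int attempts)
        {
            while (attempts-- > 0) {
                if (remove(path) == 0 || errno == ENOENT)
                    return 0;          /* gone, or never existed */
                perror("remove");      /* a permission error here usually means still open */
            }
            return -1;
        }

        /* usage in place of the original snippet */
        if (remove_with_retry("IndiceNecesidades.old", 5) == 0) {
            if (rename("IndiceNecesidades.dat", "IndiceNecesidades.old") != 0)
                perror("rename");
        }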

  • Problem getting ar_mailer/ar_sendmail working on new server

    - by Max Williams
    Hey all. I've got a new app up and running on a new Ubuntu server. It's working fine generally, but I can't get ar_sendmail working. I'm following the instructions on this page: http://www.ameravant.com/posts/sending-tons-of-emails-in-ruby-on-rails-with-ar_mailer

    The setup is all done, i.e. I can "deliver" mails, which just saves records in my Email table. Now I want to get the ar_sendmail daemon running to actually send them (so I'm at 'Running ar_sendmail in daemon mode' in that web page). First thing:

        ar_sendmail --mailq
        >> ar_sendmail: command not found

    OK... so, where is ar_sendmail? I had a look, and there's an ar_sendmail file in the bin folder of the ar_mailer plugin, so I added the location of that to my path. I don't know if this was the right thing to do or not. OK, so try again:

        ar_sendmail --mailq
        /var/www/apps/millionaire/vendor/plugins/ar_mailer/bin/ar_sendmail:3:in `require': no such file to load -- action_mailer/ar_sendmail (LoadError)
        from /var/www/apps/millionaire/vendor/plugins/ar_mailer/bin/ar_sendmail:3

    Hmm. Here's the offending file; there's not much there:

        #!/usr/bin/env ruby
        require 'action_mailer/ar_sendmail'
        ActionMailer::ARSendmail.run

    OK... so it literally is just trying to require this and can't find it. The file, action_mailer/ar_sendmail.rb, is in the ar_mailer plugin, in its lib folder. So, given that it's being called from inside the plugin, it should be able to see this, right? I've got a feeling that I'm way off track here and have missed something simple. Can anyone set me straight? I'm using Rails 2.3.4 in case that's relevant.

    EDIT - I just realised something kind of dumb: when I call ar_sendmail from the command line like this, I'm just loading that one file, which doesn't know where it's supposed to look for the rest of the stuff, I think. Which really makes me think that I'm not trying to run the right thing. Is the ar_sendmail daemon a separate program altogether, that I would get with apt-get or something?

    EDIT2 - I made some progress by installing the ar_mailer gem (which the guide said I shouldn't do), and that does seem to run. It's sending some mail request somewhere and clearing the Email table of pending emails. Running ar_sendmail in -ov (oneshot verbose) mode, I see it report this, for example:

        sent email 00000000019 from [email protected] to [email protected]: #

    So it actually looks like it's working now, and I just need to set up the ACTUAL THING WHICH SENDS EMAILS. Sigh. Still grateful for any advice. Thanks, max

  • Potential issues using member's "from" address and the "sender" header

    - by Paul Burney
    Hi all, a major component of our application sends email to members on behalf of other members. Currently we set the "From" address to our system address and use a "Reply-To" header with the member's address. The issue is that replies from some email clients (and auto-replies/bounces) don't respect the "Reply-To" header, so they get sent to our system address, effectively sending them to a black hole.

    We're considering setting the "From" address to our member's address, and the "Sender" address to our system address. It appears this way would pass SPF and Sender-ID checks. Are there any reasons not to switch to this method? Are there any other potential issues? Thanks in advance, -Paul

    Here are way more details than you probably need: when the application was first developed, we just changed the "From" address to be that of the sending member, as that was the common practice at the time (this was many years ago). We later changed that to have the "From" address be the member's name and our address, i.e.:

        From: "Mary Smith" <[email protected]>

    with a "Reply-To" header set to the member's address:

        Reply-To: "Mary Smith" <[email protected]>

    This helped with messages being mis-categorized as spam. As SPF became more popular, we added an additional header that would work in conjunction with our SPF records:

        Sender: <[email protected]>

    Things work OK, but it turns out that, in practice, some email clients and most MTAs don't respect the "Reply-To" header. Because of this, many members send messages to [email protected] instead of the desired member. So I started envisioning various schemes to add data about the sender to the email headers, or to encode it in the "From" email address, so that we could process the response and redirect appropriately. For example:

        From: "Mary Smith" <[email protected]>

    where the string after "messages" is a hash representing Mary Smith's member record in our system. Of course, that path could lead to a lot of pain, as we would need to develop MTA functionality for our system address.

    I was looking again at the SPF documentation and found this page interesting: http://www.openspf.org/Best_Practices/Webgenerated They show two examples, that of evite.com and that of egreetings.com. Basically, evite.com is doing it the way we're doing it. The egreetings.com example uses the member's from address with an added "Sender" header. So the question is: are there any potential issues with using the egreetings method of the member's from address with a sender header? That would eliminate the replies that bad clients send to the system address. I don't believe it solves the bounce/vacation/whitelist issue, since those often send to the MAIL FROM even if Return-Path is specified.

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX, etc. So I installed 10g and found that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement: creating source data for testing my data transformation software.

    The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in one schema and load it into the same table in a different schema.

    Now, I have two issues in creating my target schemas and emptying them. I would like this to be a batch job, but in the Oracle scripts you get, the sample schema names are not configurable. So I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is irritating because the sample schemas are complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other odd objects. And while I would like the target schemas (TR_HR, TR_OE, ...) to be empty, some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting the data, and then adding the constraints back.

    Is there any easy way to do all this, without all this fuss? I need a complicated data set for my testing (complicated as in tables with triggers and multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data-set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.
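
    One way to avoid hand-removing foreign keys: disable them all, truncate, then re-enable. A hedged PL/SQL sketch, run as each TR_ schema owner and assuming the FKs live in the same schema (circular references across schemas would need dba_constraints and explicit schema names instead):

        BEGIN
          -- disable every referential (FK) constraint in this schema
          FOR c IN (SELECT table_name, constraint_name
                      FROM user_constraints WHERE constraint_type = 'R') LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name ||
                              ' DISABLE CONSTRAINT ' || c.constraint_name;
          END LOOP;
          -- empty every table (TRUNCATE is allowed once the FKs are disabled)
          FOR t IN (SELECT table_name FROM user_tables) LOOP
            EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || t.table_name;
          END LOOP;
          -- put the constraints back
          FOR c IN (SELECT table_name, constraint_name
                      FROM user_constraints WHERE constraint_type = 'R') LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name ||
                              ' ENABLE CONSTRAINT ' || c.constraint_name;
          END LOOP;
        END;
        /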

  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems, and I think I have the fix, but I'm not certain my understanding is correct. In simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate  date
        rectime  time
        system   varchar(20)
        count    integer
        accum1   integer
        accum2   integer

    There are a lot more accumulators than that, but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected into the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count.

    Now we also have a view over that table, specified as follows:

        create view blah_v (
          recdate, rectime, system, count,
          accum1, accum2
        ) as select distinct
          recdate, rectime, system, count,
          value (case when count > 0 then accum1 / count end, 0),
          value (case when count > 0 then accum2 / count end, 0)
        from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish - you're preaching to the choir).

    We've noticed that the time taken to do:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is loading all the rows in and sorting them so as to remove duplicates. That's fair enough; it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means it's impossible to have duplicates anyway. We've validated the problem since: if we create another view without the distinct, it performs at the same speed as the underlying table.

    I just wanted to confirm my understanding that a select distinct cannot return duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.
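
    The understanding is correct: every output row of the view carries the table's full primary key (recdate, rectime, system), so no two rows can be identical and the DISTINCT can only cost time. A sketch of the same view with it dropped:

        create view blah_v (
          recdate, rectime, system, count,
          accum1, accum2
        ) as select
          recdate, rectime, system, count,
          value (case when count > 0 then accum1 / count end, 0),
          value (case when count > 0 then accum2 / count end, 0)
        from blah;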

  • Linked List Design

    - by Jim Scott
    The other day, in a local .NET group I attend, the following question came up: "Is it a valid interview question to ask about linked lists when hiring someone for a .NET development position?" Not having a computer science degree, and being a self-taught developer, my response was that I did not feel it was appropriate, as in 5 years of developing with .NET I had never been exposed to linked lists and did not hear any compelling reason for a use for one. However, the person commented that it is a very common interview question, so when I left I decided I would do some research on linked lists and see what I might be missing. I have read a number of posts on Stack Overflow and done various Google searches, and decided the best way to learn about them was to write my own .NET classes to see how they work from the inside out.

    Here is my class structure.

    Singly linked list:

        // Constructor
        public SingleLinkedList(object value)

        // Public properties
        public bool IsTail
        public bool IsHead
        public object Value
        public int Index
        public int Count

        // Private fields not exposed to a property
        private SingleNode firstNode;
        private SingleNode lastNode;
        private SingleNode currentNode;

        // Methods
        public void MoveToFirst()
        public void MoveToLast()
        public void Next()
        public void MoveTo(int index)
        public void Add(object value)
        public void InsertAt(int index, object value)
        public void Remove(object value)
        public void RemoveAt(int index)

    Questions I have:

    1. What are typical methods you would expect in a linked list?
    2. What is typical behaviour when adding new records? For example, if I have 4 nodes and I am currently positioned on the second node and perform Add(), should the value be added after or before the current node? Or should it be added to the end of the list?
    3. Some of the designs I have seen expose the Node object outside the LinkedList class. In my design you simply add, get and remove values, and know nothing about any node object. Is that reasonable?
    4. Should the head and tail be placeholder objects that are only used to mark the head/tail of the list?
    5. I require my linked list to be instantiated with a value, which creates the first node of the list - essentially the head and tail of the list. Would you change that?
    6. What should the rules be when it comes to removing nodes? Should someone be able to remove all nodes?

    Here is my doubly linked list:

        // Constructor
        public DoubleLinkedList(object value)

        // Properties
        public bool IsHead
        public bool IsTail
        public object Value
        public int Index
        public int Count

        // Private fields not exposed via a property
        private DoubleNode currentNode;

        // Methods
        public void AddFirst(object value)
        public void AddLast(object value)
        public void AddBefore(object existingValue, object value)
        public void AddAfter(object existingValue, object value)
        public void Add(int index, object value)
        public void Add(object value)
        public void Remove(int index)
        public void Next()
        public void Previous()
        public void MoveTo(int index)

  • Bulk inserts into a SQLite db on the iPhone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictionary holding arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the run loop blocks for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the SQLite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that, if fixed, would drastically reduce the time it takes to complete the whole operation?

        NSString* statement;
        statement = @"BEGIN EXCLUSIVE TRANSACTION";
        sqlite3_stmt *beginStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }
        if (sqlite3_step(beginStatement) != SQLITE_DONE) {
            sqlite3_finalize(beginStatement);
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }

        NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970];

        statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)";
        sqlite3_stmt *compiledStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) {
            for (int i = 0; i < [items count]; i++) {
                NSMutableDictionary* item = [items objectAtIndex:i];
                NSString* tag = [item objectForKey:@"id"];
                NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash];
                NSInteger timestamp = [[item objectForKey:@"updated"] intValue];
                NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item];

                sqlite3_bind_int(  compiledStatement, 1, hash);
                sqlite3_bind_text( compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text( compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_int(  compiledStatement, 4, timestamp);
                sqlite3_bind_blob( compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT);

                while (YES) {
                    NSInteger result = sqlite3_step(compiledStatement);
                    if (result == SQLITE_DONE) {
                        break;
                    }
                    else if (result != SQLITE_BUSY) {
                        printf("db error: %s\n", sqlite3_errmsg(database));
                        break;
                    }
                }
                sqlite3_reset(compiledStatement);
            }
        }

        timestampB = [[NSDate date] timeIntervalSince1970] - timestampB;
        NSLog(@"Insert Time Taken: %f", timestampB);

        // COMMIT
        statement = @"COMMIT TRANSACTION";
        sqlite3_stmt *commitStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }
        if (sqlite3_step(commitStatement) != SQLITE_DONE) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }

        sqlite3_finalize(beginStatement);
        sqlite3_finalize(compiledStatement);
        sqlite3_finalize(commitStatement);

  • Team matchups for Dota Bot

    - by Dan
    I have a Ghost++ bot that hosts games of DotA (a Warcraft 3 map that is played 5 players versus 5 players), and I'm trying to come up with good formulas to balance the players going into a match based on their records (I have game history for several thousand games). I'm familiar with some of the concepts required to match up players, like confidence based on the sample size of the number of games played, and also parameter approximation and degrees of freedom, and thus throwing out any variables that don't contribute enough to the r^2.

    My bot collects quite a few variables for each player from each game. The important ones: win/lose/game did not finish; # of player kills; # of player deaths; # of kills the player assisted. The not-so-important ones: # of enemy creep kills; # of creep sneak attacks; # of neutral creep kills; # of tower kills; # of rax kills; # of courier kills.

    Quick explanation: the kills/deaths don't determine who wins, but the gold gained and lost from them is usually enough to tilt the game. Tower/rax kills are the goal of the game (once a team loses all their towers/rax, their throne can be attacked; if that is destroyed, they lose), but I don't really count these as important, because it is pretty random who gets the credit for a tower kill, and chances are if you destroy a tower it is only because some other player is doing well and distracting the other team elsewhere on the map.

    I'm getting a bit confused when trying to deal with the fact that 5 players are on a team, so ultimately each individual isn't that responsible for the team winning or losing. Take a player who is really good at killing and has 40 kills and only 10 deaths, but in his 5 games he's only won 1. Should I give him extra credit for such a high kill score despite losing? (When losing, it is hard to keep a positive kill/death ratio.) Or should I dock him for losing, assuming that despite the nice kill/death ratio he probably plays in a really greedy way, only looking out for himself and not helping the team? Ultimately I don't think I have to guess at questions like this, because I have so much data... but I don't really know how to look at the data to answer them. Can anyone help me come up with formulas to help balance teams and predict the outcome? Thanks, Dan
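
    For the "predict the outcome" half, one common starting point (my suggestion, not from the post) is an Elo-style rating applied to team averages: rate each player, compare the two team means, and update everyone on the result. A hedged Python sketch with the conventional Elo constants (1500 start, K=32, 400 scale), none of which are tuned for DotA:

        def team_rating(ratings):
            """Mean of the five players' individual ratings."""
            return sum(ratings) / float(len(ratings))

        def win_probability(team_a, team_b):
            """Logistic win chance for team_a given two lists of ratings."""
            diff = team_rating(team_b) - team_rating(team_a)
            return 1.0 / (1.0 + 10 ** (diff / 400.0))

        def update_player(rating, expected, won, k=32):
            """Nudge a rating toward the observed result; k could shrink
            as a player's game count (and thus confidence) grows."""
            return rating + k * ((1.0 if won else 0.0) - expected)

        # usage: p = win_probability(team_a, team_b); after the game, update
        # each player on a team with that same team-level expected value.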

  • PHP will not delete from MySQL

    - by Michal Kopanski
    For some reason, JavaScript/PHP won't delete my data from MySQL! Here is the rundown of the problem. I have an array that displays all my MySQL entries in a nice format, with a button to delete each entry individually. It looks like this:

        <?php
        include("login.php");

        //connection to the database
        $dbhandle = mysql_connect($hostname, $username, $password)
          or die("<br/><h1>Unable to connect to MySQL, please contact support at [email protected]</h1>");

        //select a database to work with
        $selected = mysql_select_db($dbname, $dbhandle)
          or die("Could not select database.");

        //execute the SQL query and return records
        if (!$result = mysql_query("SELECT `id`, `url` FROM `videos`"))
          echo 'mysql error: '.mysql_error();

        //fetch the data from the database
        while ($row = mysql_fetch_array($result)) {
        ?>
          <div class="video"><a class="<?php echo $row{'id'}; ?>" href="http://www.youtube.com/watch?v=<?php echo $row{'url'}; ?>">http://www.youtube.com/watch?v=<?php echo $row{'url'}; ?></a><a class="del" href="javascript:confirmation(<? echo $row['id']; ?>)">delete</a></div>
        <?php
        }

        //close the connection
        mysql_close($dbhandle);
        ?>

    The delete button has an href of javascript:confirmation(<? echo $row['id']; ?>), so once you click on delete, it runs this:

        <script type="text/javascript">
        <!--
        function confirmation(ID) {
          var answer = confirm("Are you sure you want to delete this video?")
          if (answer) {
            alert("Entry Deleted")
            window.location = "delete.php?id=" + ID;
          }
          else {
            alert("No action taken")
          }
        }
        //-->
        </script>

    The JavaScript should theoretically pass the ID on to the page delete.php. That page looks like this (and I think this is where the problem is):

        <?php
        include ("login.php");
        mysql_connect($hostname, $username, $password) or die("Unable to connect to MySQL");
        mysql_select_db ($dbname) or die("Unable to connect to database");
        mysql_query("DELETE FROM `videos` WHERE `videos`.`id` ='.$id.'");
        echo ("Video has been deleted.");
        ?>

    If there's anyone out there who may know the answer to this, I would greatly appreciate it. I am also open to suggestions (for those who aren't sure). Thanks!
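
    Two things stand out in delete.php (my reading, not from the post): $id is never read from the query string, and the '.$id.' sits inside a double-quoted string, so MySQL receives the literal text '.$id.' rather than a number. A hedged sketch of the page with both fixed, in the same old mysql_* style:

        <?php
        include("login.php");
        mysql_connect($hostname, $username, $password) or die("Unable to connect to MySQL");
        mysql_select_db($dbname) or die("Unable to connect to database");

        // read the id passed by confirmation(); the int cast also guards
        // against SQL injection through the query string
        $id = intval($_GET['id']);

        mysql_query("DELETE FROM `videos` WHERE `id` = $id")
            or die('mysql error: ' . mysql_error());
        echo "Video has been deleted.";
        ?>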

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that:

        All exceptions thrown by Hibernate are fatal. This means you have to roll back the
        database transaction and close the current Session. You aren't allowed to continue
        working with a Session that threw an exception.

    One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each record update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then a new transaction is opened for the next record, and so on. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i, btw, with Hibernate 3.24.sp1 on JBoss 4.2.

    Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it now requests a new session for each record update. So far, so good.

    However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test, btw, or ...?). We thought of shutting down the listener of the DB, but we realized that the app keeps a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried killing some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session.

    So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know:

    1. Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown?
    2. How can I reproduce this in a test? (What's the proper term for such a test?)

  • Evaluating points in time by months, but without referencing years in Rails

    - by MikeH
    FYI, there is some overlap in the initial description of this question with a question I asked yesterday, but the question is different.

    My app has users who have seasonal products. When a user selects a product, we allow him to also select the product's season. We accomplish this by letting him select a start date and an end date for each product. We're using date_select to generate two sets of drop-downs: one for the start date and one for the end date. Including years doesn't make sense for our model, so we're using the option discard_year => true.

    When you use discard_year => true, Rails still sets a year in the database; it just doesn't appear in the views. Rails sets all the years to either 0001 or 0002 in our app. Yes, we could make it 2009 and 2010, or any other pair, but the point is that we want the months and days to function independent of a particular year. If we used 2009 and 2010, those dates would be wrong next year, because we don't expect these records to be updated every year.

    My problem is that we need to dynamically evaluate the availability of products based on their relationship to the current month. For example, assume it's March 15. Regardless of the year, I need a method that can tell me that a product available from October to January is not available right now.

    If we were using actual years, this would be pretty easy. For example, in the products model, I could do this:

        def is_available?
          (season_start.past? && season_end.future?)
        end

    I could also evaluate a start_date and an end_date against the current date. However, in the setup I've described above, where we have arbitrary years that only make sense relative to each other, these methods don't work. For example, is_available? would return false for all my products, because their end dates are in the year 0001 or 0002.

    What I need is a method just like the ones I used as examples above, except that it evaluates against the current month instead of the current date, and past/future months instead of years. I have no idea how to do this, or whether Rails has any built-in functionality that could help. I've gone through all the date and time methods/helpers in the API docs, but I'm not seeing anything equivalent to what I'm describing. Thanks.
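
    One year-agnostic approach (a sketch of mine, not from the post): compare (month, day) pairs directly, treating a season whose start is "after" its end - like October to January - as wrapping across the new year. Method and receiver names below are illustrative:

        def season_includes?(start_date, end_date, today = Date.today)
          s = [start_date.month, start_date.day]
          e = [end_date.month,   end_date.day]
          t = [today.month,      today.day]
          if (s <=> e) <= 0
            # normal season, e.g. April..September
            (s <=> t) <= 0 && (t <=> e) <= 0
          else
            # wrapping season, e.g. October..January
            (t <=> s) >= 0 || (t <=> e) <= 0
          end
        end

        # e.g. on March 15, an October-to-January product:
        # season_includes?(oct_date, jan_date)  #=> false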

  • PayPal sandbox Buy Now Problem

    - by Tushar Ahirrao
    Hi, I have a PayPal sandbox test account and want to create a 'Buy Now' button. I am trying it with GWT, but it's not even working with a simple HTML form. It displays a 'Buy Now' button on the HTML page, and clicking it redirects to the PayPal site, which asks me to log in to buy the product - but after that it keeps displaying the message:

        The email address or password you have entered does not match our records. Please try again.

    I am using a buyer user to purchase the product, and I am pretty sure about the username and password. Here is the simple HTML form I am trying:

        <form action="https://www.paypal.com/cgi-bin/webscr" method="post" id="payPalForm">
          <input type="hidden" name="item_number" value="1">
          <input type="hidden" name="cmd" value="_xclick">
          <input type="hidden" name="no_note" value="1">
          <input type="hidden" name="business" value="[email protected]">
          <input type="hidden" name="lc" value="US">
          <input type="hidden" name="button_subtype" value="services">
          <input type="hidden" name="cn" value="Add special instructions to the seller">
          <input type="hidden" name="no_shipping" value="2">
          <input type="hidden" name="rm" value="1">
          <input type="hidden" name="bn" value="PP-BuyNowBF:btn_paynow_SM.gif:NonHosted">
          <input type="hidden" name="variables" value="http://google.com">
          <input type="hidden" name="cancel_return" value="http://google.com">
          <input type="hidden" name="notify_url" value="http://google.com">
          <input type="hidden" name="return" value="http://freelanceswitch.com/payment-complete/">
          <input type="hidden" name="currency_code" value="USD">
          <input name="item_name" type="hidden" value="Deal Name">
          <input name="amount" type="hidden" value="500">
          <input type="submit" name="Submit" value="Submit">
        </form>

    Please advise. Thank you.
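
    A hedged observation (mine, not from the post): sandbox buyer accounts exist only on PayPal's sandbox servers, so posting this form to the live www.paypal.com endpoint would produce exactly the "does not match our records" message. The usual fix is to point the button at the sandbox endpoint:

        <form action="https://www.sandbox.paypal.com/cgi-bin/webscr" method="post" id="payPalForm">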

  • How to do a fetch request with expressions like this on the iPhone?

    - by dontWatchMyProfile
    The documentation has an example of how to retrieve simple values only, rather than managed objects. This is reminiscent of SQL, using aliases and functions to retrieve only calculated values - actually, pretty geeky stuff. To get the minimum date from a bunch of records, this is used on the Mac:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event"
                                                  inManagedObjectContext:context];
        [request setEntity:entity];

        // Specify that the request should return dictionaries.
        [request setResultType:NSDictionaryResultType];

        // Create an expression for the key path.
        NSExpression *keyPathExpression = [NSExpression expressionForKeyPath:@"creationDate"];

        // Create an expression to represent the minimum value at the key path 'creationDate'
        NSExpression *minExpression =
            [NSExpression expressionForFunction:@"min:"
                                      arguments:[NSArray arrayWithObject:keyPathExpression]];

        // Create an expression description using the minExpression and returning a date.
        NSExpressionDescription *expressionDescription = [[NSExpressionDescription alloc] init];

        // The name is the key that will be used in the dictionary for the return value.
        [expressionDescription setName:@"minDate"];
        [expressionDescription setExpression:minExpression];
        [expressionDescription setExpressionResultType:NSDateAttributeType];

        // Set the request's properties to fetch just the property represented by the expressions.
        [request setPropertiesToFetch:[NSArray arrayWithObject:expressionDescription]];

        // Execute the fetch.
        NSError *error;
        NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error];
        if (objects == nil) {
            // Handle the error.
        }
        else {
            if ([objects count] > 0) {
                NSLog(@"Minimum date: %@", [[objects objectAtIndex:0] valueForKey:@"minDate"]);
            }
        }

        [expressionDescription release];
        [request release];

    Nice, I thought - but after a deep look into NSExpression -expressionForFunction:arguments:, it turns out that iPhone OS does NOT support the min: function. Well, is there perhaps a nifty way to use my own function for this kind of stuff on the iPhone as well? Because one thing I'm already worrying about is how I'm going to sort a table based on the calculated distance of targets on a map (location-based stuff).
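
    A common workaround when the min: function isn't available (a sketch of mine, not from the docs cited above): fetch the managed objects sorted ascending on the date and limit the fetch to one result, which pushes the minimum computation down to the store:

        // Get the earliest Event by sorting on creationDate and fetching one object.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Event"
                                       inManagedObjectContext:context]];
        NSSortDescriptor *byDate = [[NSSortDescriptor alloc] initWithKey:@"creationDate"
                                                               ascending:YES];
        [request setSortDescriptors:[NSArray arrayWithObject:byDate]];
        [request setFetchLimit:1];

        NSError *error = nil;
        NSArray *results = [context executeFetchRequest:request error:&error];
        NSDate *minDate = ([results count] > 0)
            ? [[results objectAtIndex:0] valueForKey:@"creationDate"]
            : nil;

        [byDate release];
        [request release];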

  • Reduce Multiple Errors logging in sysssislog

    - by Akshay
    Need help. I am trying to automate error notifications to be sent in mailers. For that, I am querying the sysssislog table. I have placed an "Execute SQL Task" on the package event handler "OnError". For testing purposes, I am deliberately trying to load duplicate keys into a table with a primary key column (so as to get an error). But instead of logging just one error, "Violation of primary key constraint", SSIS records 3 in the table. PFA the screenshot as well. How can I restrict the tool to log only one error and not multiple?

    Package structure:

        Package ("OnError" event handler) - DFT - OLE DB Source - OLE DB Destination

    The three logged errors:

        SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
        An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0"
        Hresult: 0x80004005 Description: "The statement has been terminated.".
        An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0"
        Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint
        'PK_SalesPerson_SalesPersonID'. Cannot insert duplicate key in object 'dbo.SalesPerson'.".

        SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "input "OLE DB Destination
        Input" (56)" failed because error code 0xC020907B occurred, and the error row disposition
        on "input "OLE DB Destination Input" (56)" specifies failure on error. An error occurred
        on the specified object of the specified component. There may be error messages posted
        before this with more information about the failure.

        SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component
        "OLE DB Destination" (43) failed with error code 0xC0209029 while processing input
        "OLE DB Destination Input" (56). The identified component returned an error from the
        ProcessInput method. The error is specific to the component, but the error is fatal and
        will cause the Data Flow task to stop running. There may be error messages posted before
        this with more information about the failure.

    Please guide me. Your help is very much appreciated. Thanks
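
    A hedged note (my reading, not from the post): this looks like standard SSIS behavior rather than triple-logging - a single failure raises a chain of OnError events (the OLE DB error, then the induced row-disposition failure, then the ProcessInput failure), and each one is logged. Rather than suppressing events, it may be simpler to filter when querying for the mailer; a T-SQL sketch assuming the standard sysssislog columns:

        -- One row per execution: the first (lowest-id) OnError event,
        -- which is normally the root-cause message.
        SELECT l.*
        FROM dbo.sysssislog AS l
        JOIN (SELECT executionid, MIN(id) AS first_error_id
              FROM dbo.sysssislog
              WHERE event = 'OnError'
              GROUP BY executionid) AS f
          ON l.id = f.first_error_id;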

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test:

        public static void main(String [] files) {
            for (String file : files) {
                File f = new File(file);
                if (f.exists()) {
                    System.err.println(file + " exists");
                    continue;
                }
                try {
                    // find out the current time, I would hope to assume that the last-modified
                    // time on the file will definitely be later than this
                    System.out.println("-----------------------------------------");
                    long time = System.currentTimeMillis();

                    // create the file
                    System.out.println("Creating " + file + " at " + time);
                    f.createNewFile();

                    // let's see what the timestamp actually is (I've only seen it <time)
                    System.out.println(file + " was last modified at: " + f.lastModified());

                    // well, ok, what if I explicitly set it to time?
                    f.setLastModified(time);
                    System.out.println("Updated modified time on " + file + " to " + time
                            + " with actual " + f.lastModified());
                } catch (IOException e) {
                    System.err.println("Unable to create file");
                }
            }
        }

    And here's what I get for output:

        -----------------------------------------
        Creating test.7 at 1272324597956
        test.7 was last modified at: 1272324597000
        Updated modified time on test.7 to 1272324597956 with actual 1272324597000
        -----------------------------------------
        Creating test.8 at 1272324597957
        test.8 was last modified at: 1272324597000
        Updated modified time on test.8 to 1272324597957 with actual 1272324597000
        -----------------------------------------
        Creating test.9 at 1272324597957
        test.9 was last modified at: 1272324597000
        Updated modified time on test.9 to 1272324597957 with actual 1272324597000

    The result is a race condition:

    1. JPoller records the time of its last check as xyz...123
    2. A file is created at xyz...456
    3. The file's last-modified timestamp actually reads xyz...000
    4. JPoller looks for new/updated files with a timestamp greater than xyz...123
    5. JPoller ignores the newly added file because xyz...000 is less than xyz...123
    6. I pull my hair out for a while

    I tried digging into the code, but both lastModified() and createNewFile() eventually resolve to native calls, so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds?

    NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C, and the stat system call shows st_mtime as truncate(xyz...000/1000).
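
    One workaround sketch (mine, not from the post): since the stored mtime is truncated to whole seconds, truncate the poller's reference time the same way and allow a second of slack, then deduplicate by file name elsewhere. A minimal, standalone Java illustration:

        import java.io.File;

        public class TruncatedCutoffPoll {
            public static void main(String[] args) {
                long now = System.currentTimeMillis();
                // whole seconds, minus one second of slack, so a file whose
                // mtime was rounded down to xyz...000 is never missed
                long cutoff = (now / 1000L) * 1000L - 1000L;
                File[] files = new File(args[0]).listFiles();
                if (files == null) return;
                for (File f : files) {
                    if (f.lastModified() >= cutoff) {
                        System.out.println("candidate: " + f.getName());
                    }
                }
            }
        }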

  • Fread binary file dynamic size string [C]

    - by Blackbinary
    I've been working on this assignment, where I need to read in "records" and write them to a file, and then have the ability to read/find them later. On each run of the program, the user can decide to write a new record, or read an old record (either by name or #). The file is binary; here is its definition:

        typedef struct {
            char *name;
            char *address;
            short addressLength, nameLength;
            int phoneNumber;
        } employeeRecord;

        employeeRecord record;

    The way the program works, it stores the structure, then the name, then the address. Name and address are dynamically allocated, which is why it is necessary to read the structure first to find the size of the name and address, allocate memory for them, then read them into that memory.

    For debugging purposes I have two programs at the moment: my file-writing program and my file-reading program. My actual problem is this: when I read a file I have written, I read in the structure, print out the phone # to make sure it works (which works fine), and then fread the name (now being able to use record.nameLength, which reports the proper value too). fread, however, does not return a usable name; it returns blank. I see two possibilities: either I haven't written the name to the file correctly, or I haven't read it in correctly.

    Here is how I write to the file, where fp is the file pointer. record.name is a proper value, and so is record.nameLength. I am writing the name including the null terminator (e.g. 'Jack\0'):

        fwrite(&record, sizeof record, 1, fp);
        fwrite(record.name, sizeof(char), record.nameLength, fp);
        fwrite(record.address, sizeof(char), record.addressLength, fp);

    And then I close the file. Here is how I read the file:

        fp = fopen("employeeRecord", "r");
        fread(&record, sizeof record, 1, fp);
        printf("Number: %d\n", record.phoneNumber);

        char *nameString = malloc(sizeof(char) * record.nameLength);
        printf("\nName Length: %d", record.nameLength);
        fread(nameString, sizeof(char), record.nameLength, fp);
        printf("\nName: %s", nameString);

    Notice there is some debug stuff in there (name length and number, both of which are correct). So I know the file opened properly, and I can use the name length fine. Why then is my output blank, or a newline, or something like that? (The output is just "Name:" with nothing after it, and the program finishes just fine.) Thanks for the help.
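
    One detail that stands out (an observation of mine, not from the post): the reader opens the file with "r" rather than "rb". Under Windows/MinGW, text mode translates CRLF sequences and treats byte 0x1A as end-of-file, which silently corrupts binary reads. A hedged sketch of the read path with the mode fixed and every fread checked, as a fragment to drop into the reading program:

        #include <stdio.h>
        #include <stdlib.h>

        FILE *fp = fopen("employeeRecord", "rb");   /* "rb": text mode ("r")
                                                       mangles binary data on Windows */
        if (fp == NULL) { perror("fopen"); exit(1); }

        if (fread(&record, sizeof record, 1, fp) != 1) {
            perror("fread struct");                 /* a short read shows up here */
            exit(1);
        }
        printf("Number: %d\n", record.phoneNumber);

        char *nameString = malloc(record.nameLength);
        size_t got = fread(nameString, 1, record.nameLength, fp);
        printf("read %u of %d name bytes: %s\n",
               (unsigned)got, record.nameLength, nameString);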

  • Searches (and general querying) with HBase and/or Cassandra (best practices?)

    - by alexeypro
    I have a User model object with quite a few fields (properties, if you wish) in it - say "firstname", "lastname", "city" and "year-of-birth". Each user also gets a "unique id". I want to be able to search by them. How do I do that properly? How do I do that at all?

    My understanding (which will work for pretty much any key-value storage - first goes the key, then the value):

        u:123456789 = serialized_json_object

    ("u" is a simple prefix for users' keys; 123456789 is the "unique id"). Now, since I want to be able to search by firstname and lastname, I can save:

        f:Steve = u:384734807,u:2398248764,u:23276263
        f:Alex  = u:12324355,u:121324334

    so the key is "f" - the prefix for firstnames - and "Steve" is an actual firstname. For "f:Steve" we save, as the value, all user ids of users named "Steve". That makes every search very, very easy. Querying by a few fields (properties) - say by firstname (i.e. "Steve") and lastname (i.e. "l:Anything") - is still easy: first get the list of user ids from "f:Steve", then the list from "l:Anything", find the intersecting user ids, and there you go.

    Problems (and there are quite a few):

    1. Saving, updating and deleting a user is a pain. It has to be an atomic and consistent operation. Also, if the size of a value is limited to some maximum, then we are in (potential) trouble, and there's no real answer here. Only zipping the list of user ids? Not too cool, though.

    2. What if we want to add a new field to search by - say, "city" - eventually? We certainly can do the same thing: "c:Los Angeles" = ..., "c:Chicago" = ..., but if we didn't foresee all those "search choices" from the very beginning, then we will have to create some nightly job or something to go over all existing User records and update those "c:CITY" keys for them... quite a big job!

    3. Problems with locking. User "u:123" updates his name to "Alex", and user "u:456" updates his name to "Alex". They both have to update "f:Alex" with their ids. That means either we run into an overwriting problem, or one update waits for the other (and imagine if there are many of them!).

    What's the best way of doing this, keeping in mind that I want to search by many fields?

    P.S. Please, the question is about HBase/Cassandra/NoSQL/key-value storages. Please, please - no advice to use MySQL and "read about" SELECTs, and worry about scaling problems "later". There is a reason why I asked MY question exactly the way I did. :-)

  • rails named_scope ignores eager loading

    - by Craig
    Two models (Rails 2.3.8):

        User: username & disabled properties; User has_one :profile
        Profile: full_name & hidden properties

    I am trying to create a named_scope that eliminates the disabled=1 and hidden=1 user-profiles. The User model is usually used in conjunction with the Profile model, so I attempt to eager-load the Profile model (:include => :profile). I created a named_scope on the User model called 'visible':

        named_scope :visible, {
          :joins => "INNER JOIN profiles ON users.id=profiles.user_id",
          :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false]
        }

    I've noticed that when I use the named_scope in a query, the eager-loading instruction is ignored.

    Variation 1 - User model only:

        # UserController
        @users = User.find(:all)

        # users index view
        <% for user in @users %>
          <p><%= user.username %></p>
        <% end %>

        # generates a single query:
        SELECT * FROM `users`

    Variation 2 - use the Profile model in the view; lazy-load the Profile model:

        # UserController
        @users = User.find(:all)

        # users index view
        <% for user in @users %>
          <p><%= user.username %></p>
          <p><%= user.profile.full_name %></p>
        <% end %>

        # generates multiple queries:
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SHOW FIELDS FROM `profiles`
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 5) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 6) ORDER BY full_name ASC LIMIT 1

    Variation 3 - eager-load the Profile model:

        # UserController
        @users = User.find(:all, :include => :profile)

        # view: no changes

        # two queries
        SELECT * FROM `users`
        SELECT `profiles`.* FROM `profiles` WHERE (`profiles`.user_id IN (1,2,3,4,5,6))

    Variation 4 - use the named_scope, including the eager-loading instruction:

        # UserController
        @users = User.visible(:include => :profile)

        # view: no changes

        # generates multiple queries
        SELECT `users`.* FROM `users` INNER JOIN profiles ON users.id=profiles.user_id
            WHERE (users.disabled = 0 AND profiles.hidden = 0)
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1

    Variation 4 does return the correct number of records, but it appears to be ignoring the eager-loading instruction. Is this an issue with cross-model named scopes? Perhaps I'm not using it correctly. Is this sort of situation handled better by Rails 3?
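
    A hedged guess at the cause (mine, not from the post): a named_scope defined with a plain options hash takes no runtime arguments, so the (:include => :profile) passed in Variation 4 is silently discarded. Two sketches that should preserve the eager load, untested against this exact schema:

        # 1. bake the include into the scope; in Rails 2.3, conditions that
        #    reference the included table force a single joined query, so the
        #    manual INNER JOIN can be dropped
        named_scope :visible,
          :include    => :profile,
          :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false]

        # 2. or keep the scope as-is and attach the include at call time
        @users = User.visible.all(:include => :profile)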

  • Classifying captured data in unknown format?

    - by monch1962
    I've got a large set of captured data (potentially hundreds of thousands of records), and I need to be able to break it down so I can both classify it and also produce "typical" data myself. Let me explain further... If I have the following strings of data:

        132T339G1P112S
        164T897F5A498S
        144T989B9B223T
        155T928X9Z554T

    ... you might start to infer the following:

    - possibly all strings are 14 characters long
    - the 4th, 8th, 10th and 14th characters may always be alphas, while the rest are numeric
    - the first character may always be a '1'
    - the 4th character may always be the letter 'T'
    - the 14th character may be limited to only being 'S' or 'T'

    and so on... As you get more and more samples of real data, some of these "rules" might disappear; if you see a 15-character string, you have evidence that the 1st "rule" is incorrect. However, given a sufficiently large sample of strings that are exactly 14 characters long, you can start to assume that "all strings are 14 characters long" and assign a numeric figure to your degree of confidence (with an appropriate set of assumptions around the fact that you're seeing a suitably random set of all possible captured data).

    As you can probably tell, a human can do a lot of this classification by eye, but I'm not aware of libraries or algorithms that would allow a computer to do it. Given a set of captured data (significantly more complex than the above...), are there libraries that I can apply in my code to do this sort of classification for me, that will identify "rules" with a given degree of confidence?

    As a next step, I need to be able to take those rules and use them to create my own data that conforms to them. I assume this is a significantly easier step than the classification, but I've never had to perform a task like this before, so I'm really not sure how complex it is.

    At a guess, Python or Java (or possibly Perl or R) are the "common" languages most likely to have these sorts of libraries, and maybe some of the bioinformatics libraries do this sort of thing. I really don't care which language I have to use; I need to solve the problem in whatever way I can. Any sort of pointer to information would be very useful. As you can probably tell, I'm struggling to describe this problem clearly, and there may be a set of appropriate keywords I can plug into Google that will point me towards the solution. Thanks in advance.
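
    A minimal Python sketch of the two halves (mine, not a library recommendation): infer simple per-position "rules" from fixed-width samples, then generate new strings that conform to them. Real data would also need length buckets, confidence scores, and so on:

        import random
        import string

        def infer_rules(samples):
            """Infer per-position character classes from same-length samples."""
            lengths = sorted({len(s) for s in samples})
            rules = {"lengths": lengths, "positions": []}
            if len(lengths) != 1:
                return rules  # variable-width: handle each length bucket separately
            for i in range(lengths[0]):
                seen = {s[i] for s in samples}
                if seen <= set(string.digits):
                    kind = "digit"
                elif seen <= set(string.ascii_uppercase):
                    kind = "alpha"
                else:
                    kind = "mixed"
                rules["positions"].append((kind, sorted(seen)))
            return rules

        def generate(rules):
            """Draw each position from the characters observed there."""
            return "".join(random.choice(chars) for _, chars in rules["positions"])

        rules = infer_rules(["132T339G1P112S", "164T897F5A498S",
                             "144T989B9B223T", "155T928X9Z554T"])
        print(rules["positions"][3])   # -> ('alpha', ['T'])
        print(generate(rules))         # a new string conforming to the rules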

  • No output from Linq to XML

    - by Gogster
    Hi all, I have the following code:

        protected void Page_Load(object sender, EventArgs e)
        {
            XElement xml = XElement.Load(Server.MapPath("ArenasMembers.xml"));

            var query = from p in xml.Descendants("members")
                        select new
                        {
                            Name = p.Element("name").Value,
                            Email = p.Attribute("email").Value
                        };

            foreach (var member in query)
            {
                Response.Write("Employee: " + member.Name + " " + member.Email + "<br />");
            }
        }

    Using the hover information in Visual Studio, the XML file is being read in correctly; however, the foreach is not outputting any of the records. XML:

        <?xml version="1.0" encoding="utf-8" ?>
        <members>
          <member>
            <arena>EAA Office</arena>
            <memberid>1</memberid>
            <name>Jane Doe</name>
            <email>[email protected]</email>
          </member>
          <member>
            <arena>EAA Office</arena>
            <memberid>2</memberid>
            <name>John Bull</name>
            <email>[email protected]</email>
          </member>
          <member>
            <arena>O2 Arena</arena>
            <memberid>3</memberid>
            <name>John Doe</name>
            <email>[email protected]</email>
          </member>
          <member>
            <arena>O2 Arena</arena>
            <memberid>4</memberid>
            <name>Bernard Cribbins</name>
            <email>[email protected]</email>
          </member>
          <member>
            <arena>Colourline Arena</arena>
            <memberid>5</memberid>
            <name>John Bon Jovi</name>
            <email>[email protected]</email>
          </member>
          <member>
            <arena>NIA</arena>
            <memberid>6</memberid>
            <name>Rhianna</name>
            <email>[email protected]</email>
          </member>
        </members>

    Can you see what is wrong?
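
    Two likely culprits, hedged (my reading of the snippet): XElement.Load returns the <members> root itself, and Descendants("members") never matches the element it is called on, so the query is empty; and email is a child element in this XML, not an attribute. A sketch of the corrected query:

        // Look for the child <member> elements, and read <email> as an element.
        // The (string) casts also avoid a NullReferenceException on missing nodes.
        var query = from p in xml.Descendants("member")
                    select new
                    {
                        Name  = (string)p.Element("name"),
                        Email = (string)p.Element("email")
                    };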

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump, if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine.

    On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation on the map. The uncontroversial (?) code that sets up the map is this:

        /** Set up the map from the given filename and position */
        protected void open() throws IOException {
            // Set up buffer, is this all the flexibility we'll need?
            channel = new FileInputStream(file).getChannel();
            MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes?
            map = map1;
            // assumes the host writing the files is little-endian (x86), ought to be configurable
            map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
            map.position(position);
        }

    and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map.

    I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8 TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60 GB), but when it blows up on the production host, it's usually well before it gets to 60 GB of data.

    Both machines are running Java 1.6.0_20-b02, though the production host is running Debian/lenny and the dev host Ubuntu/karmic. I'm not convinced that makes any difference. Both machines have 16 GB RAM and are running with the same Java heap settings.

    I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515, which is officially fixed. I already tried adding a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that that was the longest it had gone before crashing!

    If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

    Read the article

  • How to get Alfresco login ticket without user password, but with impersonating user with user principal name (UPN)

    - by dok
    I'm writing a DLL that has a function for getting an Alfresco login ticket without using the user's password, using only a user principal name (UPN). I'm calling the Alfresco REST API service /wcservice. I use NTLM in Alfresco. I'm impersonating users with the WindowsIdentity constructor, as explained at http://msdn.microsoft.com/en-us/library/ms998351.aspx#paght000023_impersonatingbyusingwindowsidentity. I checked, and the user is properly impersonated (I checked the WindowsIdentity.GetCurrent().Name property). After impersonating the user, I try to make an HttpWebRequest and set its credentials with CredentialCache.DefaultNetworkCredentials. I get the error: The remote server returned an error: (401) Unauthorized. at System.Net.HttpWebRequest.GetResponse() When I use new NetworkCredential("username", "P@ssw0rd") to set the request credentials, I get an Alfresco login ticket (HttpStatusCode.OK, 200). Is there any way that I can get an Alfresco login ticket without the user's password? Here is the code that I'm using: private string GetTicket(string UPN) { WindowsIdentity identity = new WindowsIdentity(UPN); WindowsImpersonationContext context = null; try { context = identity.Impersonate(); return MakeWebRequest(); } catch (Exception e) { return e.Message + Environment.NewLine + e.StackTrace; } finally { if (context != null) { context.Undo(); } } } private string MakeWebRequest() { string URI = "http://alfrescoserver/alfresco/wcservice/mg/util/login"; HttpWebRequest request = WebRequest.Create(URI) as HttpWebRequest; request.CookieContainer = new CookieContainer(1); //request.Credentials = new NetworkCredential("username", "p@ssw0rd"); // It works with this request.Credentials = CredentialCache.DefaultNetworkCredentials; // It doesn't work with this //request.Credentials = CredentialCache.DefaultCredentials; // It doesn't work with this either try { using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { StreamReader sr = new StreamReader(response.GetResponseStream()); return sr.ReadToEnd(); } } catch (Exception e) { return (e.Message + Environment.NewLine + e.StackTrace); } } Here are records from Alfresco stdout.log (if it helps in any way): 17:18:04,550 DEBUG [app.servlet.NTLMAuthenticationFilter] Processing request: /alfresco/wcservice/mg/util/login SID:7453F7BD4FD2E6A61AD40A31A37733A5 17:18:04,550 DEBUG [web.scripts.DeclarativeRegistry] Web Script index lookup for uri /mg/util/login took 0.526239ms 17:18:04,550 DEBUG [app.servlet.NTLMAuthenticationFilter] New NTLM auth request from 10.**.**.** (10.**.**.**:1229) 17:18:04,566 DEBUG [app.servlet.NTLMAuthenticationFilter] Processing request: /alfresco/wcservice/mg/util/login SID:7453F7BD4FD2E6A61AD40A31A37733A5 17:18:04,566 DEBUG [web.scripts.DeclarativeRegistry] Web Script index lookup for uri /mg/util/login took 0.400909ms 17:18:04,566 DEBUG [app.servlet.NTLMAuthenticationFilter] Received type1 [Type1:0xe20882b7,Domain:<NotSet>,Wks:<NotSet>] 17:18:04,566 DEBUG [app.servlet.NTLMAuthenticationFilter] Client domain null 17:18:04,675 DEBUG [app.servlet.NTLMAuthenticationFilter] Sending NTLM type2 to client - [Type2:0x80000283,Target:AlfrescoServerA,Ch:197e2631cc3f9e0a]
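
    One plausible explanation, offered as a hypothesis rather than a confirmed diagnosis: new WindowsIdentity(UPN) performs a Kerberos S4U ("protocol transition") logon, which carries no password material. Unless constrained delegation is configured for the calling account, the resulting token is typically Identification-level only, and such a token can never answer an NTLM challenge, since NTLM requires the user's password hash. That would explain why the explicit NetworkCredential works while DefaultNetworkCredentials returns 401. A small diagnostic sketch, assuming .NET Framework 4 or later for the ImpersonationLevel property (the UPN below is a placeholder):

        using System;
        using System.Security.Principal;

        class ImpersonationLevelCheck
        {
            static void Main()
            {
                // "user@domain.local" is a hypothetical UPN, not from the original post.
                WindowsIdentity identity = new WindowsIdentity("user@domain.local");

                // Identification-level tokens cannot authenticate to a remote server;
                // Impersonation or Delegation is needed for a network hop, and even
                // then only over Kerberos, not NTLM.
                Console.WriteLine(identity.ImpersonationLevel);
            }
        }

    If this prints Identification, the 401 is expected, and the options are to configure Kerberos constrained delegation to the Alfresco host or to keep passing explicit credentials.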

    Read the article

  • How to add an XML parameter to a stored procedure in C#?

    - by salvationishere
    I am developing a C# web application in VS 2008 which interacts with my AdventureWorks database in SQL Server 2008. Now I am trying to add new records to one of the tables, which has an XML column in it. How do I do this? This is the error I'm getting: System.Data.SqlClient.SqlException was caught Message="XML Validation: Text node is not allowed at this location, the type was defined with element only content or with simple content. Location: /" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=16 LineNumber=22 Number=6909 Procedure="AppendDataC" Server="." State=1 StackTrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at ADONET_namespace.ADONET_methods.AppendDataC(DataRow d, Hashtable ht) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\ADONET methods.cs:line 212 InnerException: And this is a portion of my code in C#: try { SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = conn2.CreateCommand(); cmd.CommandText = "dbo.AppendDataC"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; ... sqlParam10.SqlDbType = SqlDbType.VarChar; SqlParameter sqlParam11 = cmd.Parameters.AddWithValue("@" + ht["@col11"], d[10]); sqlParam11.SqlDbType = SqlDbType.VarChar; SqlParameter sqlParam12 = cmd.Parameters.AddWithValue("@" + ht["@col12"], d[11]); sqlParam12.SqlDbType = SqlDbType.Xml; ... conn2.Open(); cmd.ExecuteNonQuery(); //This is the line it fails on and then jumps //to the Catch statement conn2.Close(); errorMsg = "The Person.Contact table was successfully updated!"; } catch (Exception ex) { Right now in my text input MDF file I have the XML parameter as: '<Products><id>3</id><id>6</id><id>15</id></Products>' Is this a valid format for the XML?
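
    The error text suggests the target column is typed XML (validated against a schema that allows element-only content), and that the value arriving at SQL Server contains a text node, which is exactly what the literal single quotes around the value in the input file would become. The quotes are string-literal syntax in T-SQL, not part of the XML itself. A sketch of passing the value as real XML, with the quote-stripping and the "@col12" parameter name as illustrative assumptions:

        using System.Data;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using System.IO;
        using System.Xml;

        class XmlParameterSketch
        {
            static void AddXmlParameter(SqlCommand cmd, string rawValue)
            {
                // Drop the literal single quotes from the input file; sent as-is,
                // they become text nodes, which a typed XML column rejects.
                string xmlText = rawValue.Trim('\'');

                // SqlXml hands SQL Server a real, well-formed XML value
                // instead of a string that still needs parsing.
                XmlReader reader = XmlReader.Create(new StringReader(xmlText));
                cmd.Parameters.Add("@col12", SqlDbType.Xml).Value = new SqlXml(reader);
            }
        }

    Without the quotes, <Products><id>3</id><id>6</id><id>15</id></Products> is well-formed XML; whether it validates depends on the schema bound to the column.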

    Read the article
