Search Results

Search found 27890 results on 1116 pages for 'oracle retail documentation team'.

  • Is there added overhead to looking up a column in a DataTable by name rather than by index?

    - by Ben McCormack
    In a DataTable object, is there added overhead to looking up a column value by name, thisRow("ColumnA"), rather than by the column index, thisRow(0)? In which scenarios might this be an issue? I work on a team that has lots of experience writing VB6 code, and I noticed that they didn't do column lookups by name for DataTable objects or data grids. Even in .NET code, we use a set of integer constants to reference column names in these types of objects. I asked our team lead why this was so, and he mentioned that in VB6 there was a lot of overhead in looking up data by column name rather than by index. Is this still true for .NET? Example code (in VB.NET, but the same applies to C#): Public Sub TestADOData() Dim dt As New DataTable 'Set up the columns in the DataTable ' dt.Columns.Add(New DataColumn("ID", GetType(Integer))) dt.Columns.Add(New DataColumn("Name", GetType(String))) dt.Columns.Add(New DataColumn("Description", GetType(String))) 'Add some data to the data table ' dt.Rows.Add(1, "Fred", "Pitcher") dt.Rows.Add(3, "Hank", "Center Field") 'Method 1: By Column Name ' For Each r As DataRow In dt.Rows Console.WriteLine( _ "{0,-2} {1,-10} {2,-30}", r("ID"), r("Name"), r("Description")) Next Console.WriteLine() 'Method 2: By Column Index ' For Each r As DataRow In dt.Rows Console.WriteLine("{0,-2} {1,-10} {2,-30}", r(0), r(1), r(2)) Next End Sub Is there a case where method 2 provides a performance advantage over method 1?
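
    A minimal C# sketch (not from the question; the table contents are made up for illustration) that measures the name lookup against resolving the column ordinal once and indexing by it inside the loop:

        using System;
        using System.Data;
        using System.Diagnostics;

        class ColumnLookupBenchmark
        {
            static void Main()
            {
                var dt = new DataTable();
                dt.Columns.Add("ID", typeof(int));
                dt.Columns.Add("Name", typeof(string));
                for (int i = 0; i < 100000; i++)
                    dt.Rows.Add(i, "Row " + i);

                // Lookup by column name: every access resolves "Name" to an ordinal.
                var sw = Stopwatch.StartNew();
                foreach (DataRow r in dt.Rows) { object v = r["Name"]; }
                Console.WriteLine("By name:    {0} ms", sw.ElapsedMilliseconds);

                // Resolve the name once, then index by ordinal inside the loop.
                int nameOrdinal = dt.Columns["Name"].Ordinal;
                sw.Restart();
                foreach (DataRow r in dt.Rows) { object v = r[nameOrdinal]; }
                Console.WriteLine("By ordinal: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    Either style runs; the point of the sketch is to make the cost of the repeated name-to-ordinal lookup measurable rather than guessed at.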

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve into the abyss of Microsoft documentation any deeper, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional ... "audit trail table copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field), inserted into by triggers" ... setup for a database table audit trail, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

  • Can't get SFTP to work in PHP

    - by Chris Kloberdanz
    I am writing a simple SFTP client in PHP because we have the need to programmatically retrieve files from n remote servers. I am using the PECL SSH2 extension. I have run up against a road block, though. The documentation on php.net suggests that you can do this: $stream = fopen("ssh2.sftp://$sftp/path/to/file", 'r'); However, I have an ls method that attempts to do something similar: public function ls($dir) { $rd = "ssh2.sftp://{$this->sftp}/$dir"; $handle = opendir($rd); if (!is_resource($handle)) { throw new SFTPException("Could not open directory."); } while (false !== ($file = readdir($handle))) { if (substr($file, 0, 1) != '.'){ print $file . "\n"; } } closedir($handle); } I get the following error: PHP Warning: opendir(): Unable to open ssh2.sftp://Resource id #5/outgoing on remote host This makes perfect sense because that's what happens when you cast a resource to a string. Is the documentation wrong? I tried replacing the resource with host, username, and host, and that didn't work either. I know the path is correct because I can run SFTP from the command line and it works fine. Has anyone else tried to use the SSH2 extension with SFTP? Am I missing something obvious here? UPDATE: I set up SFTP on another machine in-house and it works just fine. So there must be something about the server I am trying to connect to that isn't working.

  • .NET: What's the difference between HttpMethod and RequestType of HttpRequest?

    - by Ian Boyd
    The HttpRequest class defines two properties: HttpMethod: Gets the HTTP data transfer method (such as GET, POST, or HEAD) used by the client. public string HttpMethod { get; } The HTTP data transfer method used by the client. And RequestType: Gets or sets the HTTP data transfer method (GET or POST) used by the client. public string RequestType { get; set; } A string representing the HTTP invocation type sent by the client. What is the difference between these two properties? When would I want to use one over the other? Which is the proper one to inspect to see what data transfer method was used by the client? The documentation indicates that HttpMethod will return whatever verb was used, such as GET, POST, or HEAD, while the documentation on RequestType seems to indicate only one of two possible values: GET or POST. I test with a random sampling of verbs, and both properties seem to support all verbs, and both return the same values:
    Client Used    HttpMethod    RequestType
    GET            GET           GET
    POST           POST          POST
    HEAD           HEAD          HEAD
    CONNECT        CONNECT       CONNECT
    MKCOL          MKCOL         MKCOL
    PUT            PUT           PUT
    FOOTEST        FOOTEST       FOOTEST
    What is the difference between HttpRequest.HttpMethod and HttpRequest.RequestType, and when should I use one over the other? Keywords: iis asp.net http httprequest httphandler
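
    A small C# sketch (the handler and its name are illustrative, not from the question) that echoes both properties so they can be compared side by side for whatever verb a client sends:

        using System.Web;

        // Minimal diagnostic handler: writes both properties for the current request.
        public class VerbEchoHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                HttpRequest req = context.Request;
                context.Response.ContentType = "text/plain";
                context.Response.Write("HttpMethod:  " + req.HttpMethod + "\n");
                context.Response.Write("RequestType: " + req.RequestType + "\n");
            }
        }

    Registered as an .ashx or mapped in web.config, hitting it with different verbs shows directly whether the two values ever diverge.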

  • Using SVN with a MySQL database run by XAMPP - yes or no? (and how?)

    - by Extrakun
    For my current PHP/MySQL project (with a group of 4 to 5 team members), we are using this setup: each developer codes and tests on his localhost running XAMPP, and uploads to a test server via SVN. One question that I have now is how to synchronize the MySQL database. I may have added a new table to the project and the PHP code references it, so my other team members would need access to that table for my code (once they got it through SVN) to work. We are not always working in the same office, so having a LAN and a MySQL server in the office is not feasible. So I am toying with 2 solutions: (1) Set up a test DB online, and have all the coders reference that, even when coding from localhost. Downside: you can't test if you happen not to have internet access. (2) Somehow sync the localhost copies of the MySQL DB. Is that kind of silly? And if I do consider this, how do I do it? (Which folder do I add to SVN?) (I guess a related question is how to automatically update the live MySQL DB from the testing DB, regardless of whether it is on a remote server or hosted locally via XAMPP. Any advice regarding that would be welcome!)

  • Pass an Objective-C object and a primitive type into a void *

    - by user674669
    I want to pass 2 variables, UIImage *img and int i, into another method that only takes a (void *). I tried making a C struct containing both img and i: struct MyStruct { UIImage *img; int i; } but Xcode gives me an error saying "ARC forbids Objective-C objects in structs or unions". The next thing I tried was to write an Objective-C class MyStruct2 containing img and i, alloc-initing an instance of it and typecasting it as (__bridge void*) before passing it to the method. That seems a little involved for my use case; it seems like there should be a better way. What's the simplest way to achieve this? Thank you. Edit based on comments: I have to use void * as it is required by the UIView API. I created a selector as described in the UIView API: + (void)setAnimationDidStopSelector:(SEL)selector Please see the documentation for setAnimationDidStopSelector at http://developer.apple.com/library/ios/#documentation/uikit/reference/UIView_Class/UIView/UIView.html . It says the selector should be of the form: - (void)animationDidStop:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context I want to pass both img and i into the (void *)context argument.

  • How do I use Perl's WWW::Facebook::API to publish to a user's newsfeed?

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code: use WWW::Facebook::API; my $facebook = WWW::Facebook::API->new( desktop => 0, api_key => $fb_api_key, secret => $fb_secret, session_key => $query->cookie($fb_api_key.'_session_key'), session_expires => $query->cookie($fb_api_key.'_expires'), session_uid => $query->cookie($fb_api_key.'_user') ); my $response = $facebook->stream->publish( message => qq|Test status message|, ); However, when we try to update the code above to publish newsfeed stories that include attachments and action links as specified in the Facebook API documentation for Stream.Publish, nothing works; we have tried about 100 different ways without any success. According to the CPAN documentation, all we should have to do is update our code to something like the following and pass the attachments & action links appropriately, which doesn't seem to work: my $response = $facebook->stream->publish( message => qq|Test status message|, attachment => $json, action_links => [@links], ); For example, we are passing the above arguments as follows: $json = qq|{ 'name': 'i\'m bursting with joy', 'href': ' http://bit.ly/187gO1', 'caption': '{*actor*} rated the lolcat 5 stars', 'description': 'a funny looking cat', 'properties': { 'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'}, 'ratings': '5 stars' }, 'media': [{ 'type': 'image', 'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg', 'href': 'http://bit.ly/187gO1'}] }|; @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}","{'text':'Link 2', 'href':'http://www.link2.com'}"]; Neither the above nor any of the other representations we tried seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!

  • Accessing an iframe's DOM from a Firefox extension

    - by luisgo
    Hi all, I've spent a long time trying different things to get this to work but nothing does and documentation is not helping much. I'm trying to populate a form inside an iframe that I dynamically inject into a page. To inject that iframe I do: myObject.iframe = document.createElement("iframe"); myObject.iframe.setAttribute("src", data.url); myObject.iframe.setAttribute("id","extension-iframe"); myObject.window.document.getElementById('publisher').appendChild(myObject.iframe); myObject.iframe.addEventListener("load", function(){ myObject.populate(data); }, false); which works fine and DOES trigger the populate() method. My problem is getting the document or window objects for that iframe. I've tried all of the following: myObject.iframe.window myObject.iframe.document myObject.iframe.content but these are all undefined. I also tried passing the event object to that iframe's load event listener and then doing: event.originalTarget But that didn't work either. I am reading the following documentation: https://developer.mozilla.org/En/Working%5Fwith%5Fwindows%5Fin%5Fchrome%5Fcode https://developer.mozilla.org/en/Code%5Fsnippets/Interaction%5Fbetween%5Fprivileged%5Fand%5Fnon-privileged%5Fpages But either I'm not understanding it or it's just not properly explained. Can anyone shed some light? Thanks!

  • Libraries for developing NCPDP SCRIPT based systems (a standard for e-prescribing)

    - by Kaveh Shahbazian
    What are (based on experience) the best (commercial or open source) libraries for developing NCPDP-based systems? Background: NCPDP (National Council for Prescription Drug Programs) is a not-for-profit, ANSI-accredited standards development organization. One of its standards is the SCRIPT Standard for Electronic Prescribing, which allows the PHARMACY, the PRESCRIBER (i.e. physician) and the PAYER (the patient or, more often, an insurer) to communicate. So the SCRIPT standard is about data transmission. Problem: One step in implementing such systems is to develop models for the data based on the SCRIPT standard. These models should have utilities for serializing/deserializing to/from the SCRIPT binary format and the SCRIPT XML format (there are two distinct formats here; both must be supported). Here rises the problem (for me at least): developing this subsystem for handling the model, implementing serialization and deserialization facilities, and keeping it up to date with the SCRIPT standard specifications is a lot of work; it needs its own team and brings its own team management issues (to support a standard implementation). So I am looking for a solution to this problem, to keep the standard implementation out of the way and focus on the main problems. Thanks to all (thank you, Freiheit, for your hints!). Edit 2: Thanks to all for the help! NCPDP (National Council for Prescription Drug Programs) is a standard for e-prescribing. It defines two formats for message transmission: binary and XML. Implementing XML is somewhat easier because it is a standard format, which in turn gives us more tooling options. The binary format has a very big specification and is time-consuming to implement. I did not find an open source solution to work with, so I am looking for commercial alternatives. Edit 1: Please guide me; what's wrong with this question?

  • C#+BDE+DBF problem

    - by Drabuna
    I have a huge problem: I have lots of .dbf files (~50000) and I need to import them into an Oracle database. I open a connection like this: OleDbConnection oConn = new OleDbConnection(); OleDbCommand oCmd = new OleDbCommand(); oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory + ";Extended Properties=dBASE IV;User ID=Admin;Password="; oCmd.Connection = oConn; oCmd.CommandText = @"SELECT * FROM " + tablename; try { oConn.Open(); resultTable.Load(oCmd.ExecuteReader()); } catch (Exception ex) { MessageBox.Show(ex.Message); } oConn.Close(); oCmd.Dispose(); oConn.Dispose(); I read them in a loop and then insert them into Oracle. Everything's fine. BUT: there are about 1000 files that I can't open. They raise the exception "not a table". So I googled and installed the Borland Database Engine. Now everything works fine... but no. Now, when I'm reading the files, on the 1024th file an exception is raised: "System resource exceeded". But I have lots of free resources. When I remove BDE, everything's fine again, no "System resource exceeded" error, but I can't read all the files. Help please. PS: Tried using ODBC but nothing changes.
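
    A minimal C# sketch of one thing worth ruling out (an assumption about the cause, not a confirmed fix): keeping every OLE DB object in a using block so connections, commands and readers are released deterministically even when the "not a table" exception is thrown partway through the loop:

        using System.Data;
        using System.Data.OleDb;

        static class DbfLoader
        {
            // Loads a single DBF into a DataTable; all OLE DB objects are disposed
            // even if opening or reading the file fails.
            public static DataTable Load(string directory, string tableName)
            {
                string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory +
                                 ";Extended Properties=dBASE IV;User ID=Admin;Password=";
                var result = new DataTable();
                using (var conn = new OleDbConnection(connStr))
                using (var cmd = new OleDbCommand("SELECT * FROM " + tableName, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        result.Load(reader);
                    }
                }
                return result;
            }
        }

    If leaked handles from the failed opens are what eventually trips the "System resource exceeded" error, disposing everything this way is the cheapest experiment to run before blaming BDE.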

  • Why use Django on Google App Engine?

    - by Travis Bradshaw
    When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google. To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skillset in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project and our existing experience is with TurboGears, not Django. It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack. Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and needed a platform to target for the exodus. I'd be extremely appreciative of help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE is also valuable to me. Thanks in advance for your time!

  • Getting an Android App to Show Up in the Market for "Sony Internet TV" (Google TV)

    - by user1291659
    I'm having a bit of trouble getting my app to show up in the Market under Google TV. I've searched Google's official documentation and I don't believe the manifest lists any elements which would invalidate the program; the only hardware requirements specified are landscape mode, wake lock and external storage (none of which should cause it to be filtered for GTV according to the documentation), and I set the touchscreen uses-feature element's "required" attribute to false. Below is the AndroidManifest.xml for my project: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.whateversoft" android:versionCode="2" android:versionName="0.1" > <uses-sdk android:minSdkVersion="8" /> <application android:icon="@drawable/ic_launcher" android:label="Color Shafted" android:theme="@style/Theme.NoBackground" android:debuggable="false"> <activity android:label="Color Shafted" android:name=".colorshafted.ColorShafted" android:configChanges = "keyboard|keyboardHidden|orientation" android:screenOrientation = "landscape"> <!-- Set as the default run activity --> <intent-filter > <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:label="Color Shafted Settings" android:name=".colorshafted.Settings" android:theme="@android:style/Theme" android:configChanges = "keyboard|keyboardHidden"> <!-- --> </activity> </application> <!-- DEFINE PERMISSIONS FOR CAPABILITIES --> <uses-permission android:name = "android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name = "android.permission.WAKE_LOCK"/> <uses-feature android:name="android.hardware.touchscreen" android:required="false" /> <!-- END OF PERMISSIONS FOR CAPABILITIES --> </manifest> I'm about to start promoting the app after the next major release, so it's been kind of a bummer that I can't seem to get this to work. Any help would be appreciated, thanks in advance : )

  • Passing a TADODataSet as a parameter using TADOStoredProc

    - by Salvador
    I have an Oracle stored procedure with 2 parameters declared as input cursors. How can I assign these parameters using the TADOStoredProc component? ORACLE PROCEDURE MYSTOREDPROCEDURE(P_HEADER IN TCursor, P_DETAL IN TCursor, P_RESULT OUT VARCHAR2) BEGIN //My code goes here END; Delphi function TMyClass.Add(Header, Detail: TADODataSet;var _Result: string): boolean; Var StoredProc : TADOStoredProc; begin Result:=False; StoredProc:=TADOStoredProc.Create(nil); try StoredProc.Connection :=ADOConnection1; StoredProc.ProcedureName:='MYSTOREDPROCEDURE'; StoredProc.Parameters.Refresh; StoredProc.Parameters.ParamByName('P_HEADER').Value :=Header;//How can I assign this parameter? try StoredProc.Open; _Result:=VarToStrNull(StoredProc.Parameters.ParamByName('P_RESULT').Value); StoredProc.Close; Result:=True; except on E : Exception do begin _Result:=E.Message; //exit; end; end; finally StoredProc.Free; end; end;

  • How do I add a header with data to a QTableWidget in Qt?

    - by San Jacinto
    Hi, I'm still learning Qt and I am indebted to the SO community for providing me with great, very timely answers to my Qt questions. Thank you. I'm quite confused about the idea of adding a header to a QTableWidget. What I'd like to do is have a table that contains information about team members. Each row for a member should contain his first and last name, each in its own cell, an email address in one cell, and his office in another cell. I'd like to have a header above these columns to name them appropriately. I'm trying to start off easy and get just the header to display "Last" (as in last name). Here is my code: int column = m_ui->teamTableWidget->columnCount(); m_ui->teamTableWidget->setColumnCount(column+1); QString* qq = new QString("Last"); m_ui->teamTableWidget->horizontalHeader()->model()->setHeaderData(0, Qt::Horizontal, QVariant(QVariant::String, &qq)); My table gets rendered correctly, but the header doesn't contain what I would expect. It contains one cell that contains the text "1". I am obviously doing something very silly here that is wrong, but I am lost. I keep poring over the documentation, finding nothing. Here are the documentation links for the function calls I am making on the very last line: http://doc.trolltech.com/4.5/qtableview.html#horizontalHeader http://doc.trolltech.com/4.5/qabstractitemview.html#model http://doc.trolltech.com/4.5/qabstractitemmodel.html#setHeaderData Thanks for any and all help. Edit: HOW I SOLVED THE PROBLEM Using some help from the accepted answer, I came up with the following code: m_ui->teamTableWidget->setColumnCount(m_ui->teamTableWidget->columnCount()+1); QTableWidgetItem* qtwi = new QTableWidgetItem(QString("Last"),QTableWidgetItem::Type); m_ui->teamTableWidget->setHorizontalHeaderItem(0,qtwi);

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table into my local SQL table. I have a variable for the package that is an Object, called OWell. I have a data flow task that gets the Oracle data with a SQL statement (select well_id, well_name from OWell order by Well_ID), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50. That is then stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field, and SSIS shows well_id as a DT_WSTR of length 15 and well_name as a DT_WSTR of length 50. I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things: using OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None, and the following SQL statement: Insert into WELL (WELL_ID, WELL_NAME) Select OWELL_ID, OWELL_NAME from OWell where OWELL_ID not in (select WELL.WELL_ID from WELL) For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell. Parameter 1, called OWell_Name, is from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result Set. I am getting the following error: Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. I don't think it's a data type issue, but rather that I somehow am not using the resultset properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?
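
    One approach people use instead of pointing an Execute SQL Task at the Object variable is a Script Task that shreds the recordset itself; below is a C# sketch (the variable name User::OWell comes from the question, the surrounding ScriptMain class and ScriptResults enum are generated by the SSIS Script Task template, and the loop body is left as a placeholder):

        using System.Data;
        using System.Data.OleDb;

        public void Main()
        {
            // The Object variable holds an ADO recordset; OleDbDataAdapter.Fill
            // can load it into a DataTable that is easy to loop over.
            // "User::OWell" must be listed in the task's ReadOnlyVariables.
            var table = new DataTable();
            var adapter = new OleDbDataAdapter();
            adapter.Fill(table, Dts.Variables["User::OWell"].Value);

            foreach (DataRow row in table.Rows)
            {
                string wellId = row["well_id"].ToString();
                string wellName = row["well_name"].ToString();
                // ... insert the row into WELL if WELL_ID is not already present ...
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    Alternatively, a Foreach Loop container with the ADO enumerator can iterate the same Object variable and hand well_id/well_name to the Execute SQL Task as ordinary parameters, which avoids passing the whole recordset into one SQL statement.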

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web application to production. Currently, that group has access to our current code repository in VSS and uses VSS to deploy code that has been checked into VSS. How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect that the deployment group will be purchasing a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that that group would need to perform the deployment? Would Team Explorer with a Client Access License suffice? I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to production by copying aspx and dll files from the QA environment to production, or do you normally deploy from TFS or even VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release, or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS. What confuses me is whether bin (binary) files that are local to the project go into TFS. If so, doesn't this create problems for other developers, in that only one developer - the one with the binary checked out - can actually debug, because debugging requires write access to the binaries? Does this mean that the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds really ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number? Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments, in particular using TFS?

  • Java remove HTML from String without regular expressions

    - by behrk2
    Hello, I am trying to remove all HTML elements from a String. Unfortunately, I cannot use regular expressions because I am developing on the Blackberry platform and regular expressions are not yet supported. Is there any other way that I can remove HTML from a string? I read somewhere that you can use a DOM Parser, but I couldn't find much on it. Text with HTML: <![CDATA[As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (<a href="http://www.netflix.com/RoleDisplay/Billy_Bob_Thornton/20000303">Billy Bob Thornton</a>) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (<a href="http://www.netflix.com/RoleDisplay/Bruce_Willis/99786">Bruce Willis</a>) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task. <a href="http://www.netflix.com/RoleDisplay/Ben_Affleck/20000016">Ben Affleck</a> and <a href="http://www.netflix.com/RoleDisplay/Liv_Tyler/162745">Liv Tyler</a> co-star.]]> Text without HTML: As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (Billy Bob Thornton) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (Bruce Willis) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task.Ben Affleck and Liv Tyler co-star. Thanks!

  • Can I perform some processing on the POST data before ASP.NET MVC UpdateModel happens?

    - by Domenic
    I would like to strip out non-numeric elements from the POST data before using UpdateModel to update the copy in the database. Is there a way to do this? // TODO: it appears I don't even use the parameter given at all, and all the magic // happens via UpdateModel and the "controller's current value provider"? [HttpPost] public ActionResult Index([Bind(Include="X1, X2")] Team model) // TODO: stupid magic strings { if (this.ModelState.IsValid) { TeamContainer context = new TeamContainer(); Team thisTeam = context.Teams.Single(t => t.TeamId == this.CurrentTeamId); // TODO HERE: apply StripWhitespace() to the data before using UpdateModel. // The data is currently somewhere in the "current value provider"? this.UpdateModel(thisTeam); context.SaveChanges(); this.RedirectToAction(c => c.Index()); } else { this.ModelState.AddModelError("", "Please enter two valid Xs."); } // If we got this far, something failed; redisplay the form. return this.View(model); } Sorry for the terseness, up all night working on this; hopefully my question is clear enough? Also sorry since this is kind of a newbie question that I might be able to get with a few hours of documentation-trawling, but I'm time-pressured... bleh.
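
    A sketch of one way to do it in C# (assuming ASP.NET MVC 2 or later; TeamContainer, Team, and CurrentTeamId are the types and property from the question's own code, and the digit-stripping rule is just an example): clean the posted values first, then hand the cleaned collection to UpdateModel as its value provider.

        using System.Linq;
        using System.Text.RegularExpressions;
        using System.Web.Mvc;

        public class TeamsController : Controller
        {
            [HttpPost]
            public ActionResult Index(FormCollection form)
            {
                // Strip everything that isn't a digit from the posted values.
                var cleaned = new FormCollection();
                foreach (string key in form.AllKeys)
                    cleaned[key] = Regex.Replace(form[key] ?? "", "[^0-9]", "");

                var context = new TeamContainer();
                Team thisTeam = context.Teams.Single(t => t.TeamId == CurrentTeamId);

                // UpdateModel reads from the cleaned collection, not Request.Form.
                UpdateModel(thisTeam, new[] { "X1", "X2" }, cleaned.ToValueProvider());
                context.SaveChanges();
                return RedirectToAction("Index");
            }
        }

    A custom model binder for the numeric properties would have the same effect, but pre-cleaning the FormCollection keeps the change local to this one action.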

  • What's the best way to read a UDT from a database with Java?

    - by Lukas Eder
    I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput): "An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods." This is quite the opposite of what I am used to doing (an approach that is also used and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to ResultSet.getObject(int index, Map mapping). The JDBC driver will then call back on my custom type using the SQLData.readSQL(SQLInput stream, String typeName) method. I implement this method and read each field from the SQLInput stream. In the end, getObject() will return a correctly initialised instance of my SQLData implementation holding all data from the UDT. To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way: I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc. I can generate source code from my UDTs, with appropriate getters/setters and other properties. My questions: What do you think about my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise? What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do? Addendum: UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions like the above, jOOQ might provide an answer to you.

  • What is default javac source mode (assert as identifier compilation)?

    - by waste
    According to Oracle's Java 7 assert guide: source mode 1.3 (default) — the compiler accepts programs that use assert as an identifier, but issues warnings. In this mode, programs are not permitted to use the assert statement. source mode 1.4 — the compiler generates an error message if the program uses assert as an identifier. In this mode, programs are permitted to use the assert statement. I wrote this class: package mm; public class ClassTest { public static void main(String[] arg) { int assert = 1; System.out.println(assert); } } It should compile fine if Oracle's info is right (1.3 is the default source mode). But I got errors like this: $ javac -version javac 1.7.0_04 $ javac -d bin src/mm/* src\mm\ClassTest.java:5: error: as of release 1.4, 'assert' is a keyword, and may not be used as an identifier int assert = 1; ^ (use -source 1.3 or lower to use 'assert' as an identifier) src\mm\ClassTest.java:6: error: as of release 1.4, 'assert' is a keyword, and may not be used as an identifier System.out.println(assert); ^ (use -source 1.3 or lower to use 'assert' as an identifier) 2 errors I manually added -source 1.3 and it issued warnings but compiled fine. It seems that Oracle's information is wrong and 1.3 is not the default source mode. Which one is it, then?

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi all, I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine, as I can access MySQL from the command line and make changes to the database. My problem is that if I now run rake db:migrate I get: rake aborted! uninitialized constant MysqlCompat::MysqlRes (See full trace by running task with --trace) Now it appears that my mysql gem isn't working correctly, as I can access MySQL fine from Python using the Python driver (which I also compiled). I therefore tried to rebuild the gem using the following command from this site: http://techliberty.blogspot.com/ (incidentally, I am using a recent Intel MacBook Pro): sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config This compiles, although I get "No definition for" messages while it installs the documentation: Building native extensions. This could take a while... Successfully installed mysql-2.8.1 1 gem installed Installing ri documentation for mysql-2.8.1... No definition for next_result No definition for field_name ... I'm a little stumped, as my mysql_config is located in the correct place: /usr/local/mysql/bin/mysql_config And I have removed all other instances of the mysql gem from my system. Any suggestions would be greatly appreciated. Many thanks. PS: I saw this previous post http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem but it doesn't seem applicable to my version.

  • Question about the Cloneable interface and the exception that should be thrown

    - by Nazgulled
    Hi, The Java documentation says: "A class implements the Cloneable interface to indicate to the Object.clone() method that it is legal for that method to make a field-for-field copy of instances of that class. Invoking Object's clone method on an instance that does not implement the Cloneable interface results in the exception CloneNotSupportedException being thrown. By convention, classes that implement this interface should override Object.clone (which is protected) with a public method. See Object.clone() for details on overriding this method. Note that this interface does not contain the clone method. Therefore, it is not possible to clone an object merely by virtue of the fact that it implements this interface. Even if the clone method is invoked reflectively, there is no guarantee that it will succeed." And I have this UserProfile class: public class UserProfile implements Cloneable { private String name; private int ssn; private String address; public UserProfile(String name, int ssn, String address) { this.name = name; this.ssn = ssn; this.address = address; } public UserProfile(UserProfile user) { this.name = user.getName(); this.ssn = user.getSSN(); this.address = user.getAddress(); } // get methods here... @Override public UserProfile clone() { return new UserProfile(this); } } And for testing purposes, I do this in main(): UserProfile up1 = new UserProfile("User", 123, "Street"); UserProfile up2 = up1.clone(); So far, no problems compiling/running. Now, per my understanding of the documentation, removing implements Cloneable from the UserProfile class should make the up1.clone() call throw an exception, but it doesn't. I've read around here that the Cloneable interface is broken, but I don't really know what that means. Am I missing something?

  • Running bundle install fails trying to remote fetch from rubygems.org/quick/Marshal...

    - by dreeves
    I'm getting a strange error when doing bundle install: $ bundle install Fetching source index for http://rubygems.org/ rvm/rubies/ree-1.8.7-2010.02/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:304 :in `open_uri_or_path': bad response Not Found 404 (http://rubygems.org/quick/Marshal.4.8/resque-scheduler-1.09.7.gemspec.rz) (Gem::RemoteFetcher::FetchError) I've tried bundle update, gem source -c, gem update --system, gem cleanup, etc etc. Nothing seems to solve this. I notice that the URL beginning with http://rubygems.org/quick does seem to be a 404 -- I don't think that's any problem with my network, though if that's reachable for anyone else then that would be a simple explanation for my problem. More hints: If I just gem install resque-scheduler it works fine: $ gem install resque-scheduler Successfully installed resque-scheduler-1.9.7 1 gem installed Installing ri documentation for resque-scheduler-1.9.7... Installing RDoc documentation for resque-scheduler-1.9.7... And here's my Gemfile: source 'http://rubygems.org' gem 'json' gem 'rails', '>=3.0.0' gem 'mongo' gem 'mongo_mapper', :git => 'git://github.com/jnunemaker/mongomapper', :branch => 'rails3' gem 'bson_ext', '1.1' gem 'bson', '1.1' gem 'mm-multi-parameter-attributes', :git=>'git://github.com/rlivsey/mm-multi-parameter-attributes.git' gem 'devise', '~>1.1.3' gem 'devise_invitable', '~> 0.3.4' gem 'devise-mongo_mapper', :git => 'git://github.com/collectiveidea/devise-mongo_mapper' gem 'carrierwave', :git => 'git://github.com/rsofaer/carrierwave.git' , :branch => 'master' gem 'mini_magick' gem 'jquery-rails', '>= 0.2.6' gem 'resque' gem 'resque-scheduler' gem 'SystemTimer' gem 'capistrano' gem 'will_paginate', '3.0.pre2' gem 'twitter', '~> 1.0.0' gem 'oauth', '~> 0.4.4'

  • Encryption puzzle / How to create a PassStub for a Remote Assistance ticket

    - by Jon Clegg
    I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. According to the documentation (http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx): PassStub: The encrypted novice computer's password string. When the Remote Assistance Connection String is sent as a file over e-mail, to provide additional security, a password is used. Part 16 details how to create a PassStub: "In Windows XP and Windows Server 2003, when a password is used, it is encrypted using the PROV_RSA_FULL predefined cryptographic provider with MD5 hashing and CALG_RC4, the RC4 stream encryption algorithm." The PassStub looks like this in the file: PassStub="LK#6Lh*gCmNDpj" If you want to generate one yourself, run msra.exe in Vista or the Remote Assistance tool in WinXP. The documentation says this stub is the result of the function CryptEncrypt with the key derived from the password and encrypted with the session id (those are also in the ticket file). The problem is that CryptEncrypt produces a binary output way larger than the 15-byte PassStub. Also, the PassStub isn't encoded in any way I've seen before. Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd char is always one of !#$&()+-=@^. The only symbols seen everywhere are * and _. Otherwise the valid characters are 0-9, a-z, A-Z. There are a total of 75 valid characters, and a PassStub is always 15 bytes. Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid as they say. Another idea I've had is that it is not the direct result of CryptEncrypt, but a result of including the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx), the "PassStub Novice" is simply hex encoded and looks to be the right length. The problem is I have no idea how to get from any hash to the way the PassStub looks.

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that isn't in the cache, Squid proxies all of the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all be fulfilled by Squid when the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches, e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thx for looking over my noob question! Oliver
