Search Results

Search found 12714 results on 509 pages for 'db schema'.


  • Why is apt-cache so slow?

    - by Damn Terminal
    After upgrading to Trusty (14.04) from Saucy (13.10), all apt operations are very slow, even those that do not involve downloading anything or connecting to any servers. For example, displaying the apt policy takes almost ten seconds:

        # time apt-cache policy
        [...]
        real 0m8.951s
        user 0m5.069s
        sys 0m3.861s

    Most of it is a weird lag right after issuing the command, and it is the same even if I issue the same command again. On another system it doesn't take a tenth of a second:

        real 0m0.096s
        user 0m0.070s
        sys 0m0.023s

    The other system is a little beefier, but there was no noticeable difference before the upgrade. It's the same with apt-get and anything apt-related. How do I find out the source of this lag and fix it?

    Additional info:

        # cat /etc/nsswitch.conf
        # /etc/nsswitch.conf
        #
        # Example configuration of GNU Name Service Switch functionality.
        # If you have the `glibc-doc-reference' and `info' packages installed, try:
        # `info libc "Name Service Switch"' for information about this file.
        passwd: compat
        group: compat
        shadow: compat
        hosts: files dns
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis

    By the way, is my understanding of how apt-cache works correct? It doesn't make any network connections when I run apt-cache policy, right? In case I'm wrong and it matters, here are my sources: https://gist.github.com/anonymous/02920270ff68e23fc3ec

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined the question to focus on the core issue.

    Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume the collection itself is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized source.

    Example queries:
    Simple: for a given XYZ post code, what is the average price for a 3-bedroom house?
    Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code", worked out from averages of the historic data filtered by various characteristics of the house (post code, number of bedrooms, total area, and deeper attributes such as building type, year built, features)?

    In addition to the average price, the application should support other figures (maximum or minimum price, etc.) and a trend (graph) of a selected property attribute over a period of time. Hence the queries should not force the search through a primary key or a few fixed fields. In other words, queries can be:
    What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    What kind of properties can we get for price X (irrespective of location or house type)?

    The challenge I have is identifying the domain this problem (dynamic queries on historic data) belongs to, whether BI/data analytics, DB design, DB query interfaces, data warehousing, or something else, so that I can explore it further.

    My findings so far (I could be wrong on any of these, so please correct me):
    BI/data analytics: I think it is a heavyweight solution for my problem and has scalability issues.
    DB design: as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes of a property (or of other entities such as users) to evolve quickly, so maintenance would be an issue, and with multiple users running queries at the same time performance could become a bottleneck.
    Graph DBs (http://www.tinkerpop.com/): they look good but a bit complex; using such general-purpose tools for this feels like solving my problem in assembly.
    BigData-related solutions seem aimed at analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience with the back end of a property-listing or similar portal.)
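
    A minimal sketch of the kind of dynamic filtering the question describes, in plain Python over an in-memory list of sale records; the field names (postcode, bedrooms, price, sale_date) and the data are hypothetical, and a real system would push this work into whatever store is chosen:

        from datetime import date
        from statistics import mean

        # Hypothetical historical sales; in practice these rows would come from the
        # centralized data source.
        sales = [
            {"postcode": "XYZ", "bedrooms": 3, "area": 95, "price": 250000, "sale_date": date(2024, 1, 10)},
            {"postcode": "XYZ", "bedrooms": 3, "area": 110, "price": 280000, "sale_date": date(2024, 2, 3)},
            {"postcode": "ABC", "bedrooms": 2, "area": 70, "price": 180000, "sale_date": date(2024, 2, 20)},
        ]

        def query(records, aggregate=mean, field="price", **filters):
            """Apply arbitrary equality filters, then aggregate one field."""
            matches = [r for r in records
                       if all(r.get(k) == v for k, v in filters.items())]
            return aggregate(r[field] for r in matches) if matches else None

        # "Simple" query: average price of 3-bedroom houses in post code XYZ.
        print(query(sales, postcode="XYZ", bedrooms=3))
        # Same engine, different question: cheapest 3-bedroom house anywhere.
        print(query(sales, aggregate=min, bedrooms=3))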

    Read the article

  • RDBMS or Key-Value store?

    - by ?? ?
    (The original post is in Japanese and the text was mis-encoded, so most of it is unrecoverable. From the surviving fragments it asks, roughly, when an RDBMS is still the right choice and when a Key-Value store is, citing TimesTen as an in-memory relational example and Coherence as a Key-Value example, and touching on Web front-end workloads, BI, ERP, and multi-terabyte data volumes.)

    Read the article

  • Variable in one query is getting into another query in view

    - by Jason Shultz
    I have two foreach statements. The variable from one one is somehow getting into another and i'm not sure how to fix it. Here's my controller: // Categories Page Code function categories($id) { $this->load->model('Business_model'); $data['businessList'] = $this->Business_model->categoryPageList($id); $data['catList'] = $this->Business_model->categoryList(); $data['featured'] = $this->Business_model->frontPageList(); $data['user_id'] = $this->tank_auth->get_user_id(); $data['username'] = $this->tank_auth->get_username(); $data['page_title'] = 'Welcome To Jerome - Largest Ghost Town in America'; $data['page'] = 'category_view'; // pass the actual view to use as a parameter $this->load->view('container',$data); } What happens is categories will only show the businesses in a certain category. The businesses are pulled from the database using the categoryPageList($id) function. Here is that function: function categoryPageList($id) { $this->db->select('b.id, b.busname, b.busowner, b.webaddress, p.thumb, v.title, c.catname, s.specname, p.thumb, c.id'); $this->db->from ('business AS b'); $this->db->where('b.category', $id); $this->db->join('photos AS p', 'p.busid = b.id', 'left'); $this->db->join('video AS v', 'v.busid = b.id', 'left'); $this->db->join('specials AS s', 's.busid = b.id', 'left'); $this->db->join('category As c', 'b.category = c.id', 'left'); $this->db->group_by("b.id"); return $this->db->get(); } And here is the view: <h2>Welcome to Jerome, Arizona</h2> <p>Choose the Category of Business you are interested in:<br/> <?php foreach ($catList->result() as $row): ?> <a href="/site/categories/<?=$row->id?>"><?=$row->catname?></a>, &nbsp; <?php endforeach; ?></p> <table id="businessTable" class="tablesorter"> <thead><tr><th>Business Name</th><th>Business Owner</th><th>Web</th><th>Photos</th><th>Videos</th><th>Specials</th></tr></thead> <?php if(count($businessList) > 0) : foreach ($businessList->result() as $crow): ?> <tr> <td><a href="/site/business/<?=$crow->id?>"><?=$crow->busname?></a></td> <td><?=$crow->busowner?></td> <td><a href="<?=$crow->webaddress?>">Visit Site</a></td> <td> <?php if(isset($crow->thumb)):?> yes <?php else:?> no <?php endif?> </td> <td> <?php if(isset($crow->title)):?> yes <?php else:?> no <?php endif?> </td> <td> <?php if(isset($crow->specname)):?> yes <?php else:?> no <?php endif?> </td> </tr> <?php endforeach; ?> <?php else : ?> <td colspan="4"><p>No Category Selected</p></td> <?php endif; ?> </table> The problem occurs here. <?=$crow->id?> should be showing the row id from the business table. Instead, it's showing the row ID of the category table. so, if i'm viewing /site/categories/6 <?=$crow->id?> will show 6 when it should be showing 10 (the row ID of the only business in that category at this time. How can I fix this?
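
    One detail worth noticing in the query is that both b.id and c.id end up in the select list under the same name, id; once a row is turned into an object or map, only one of the two values can survive under that key. A small Python sqlite3 sketch of that collision (an invented two-table schema, not the CodeIgniter result object itself):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE business (id INTEGER, busname TEXT, category INTEGER);
            CREATE TABLE category (id INTEGER, catname TEXT);
            INSERT INTO business VALUES (10, 'Ghost Town Tours', 6);
            INSERT INTO category VALUES (6, 'Tours');
        """)

        cur = conn.execute("""
            SELECT b.id, b.busname, c.catname, c.id
            FROM business b LEFT JOIN category c ON b.category = c.id
        """)
        columns = [d[0] for d in cur.description]
        print(columns)                   # ['id', 'busname', 'catname', 'id'] - two columns called id
        row = dict(zip(columns, cur.fetchone()))
        print(row["id"])                 # 6 - the later column wins once the row is keyed by name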

    Read the article

  • Runtime error of TASM language help!

    - by dominoos
    .model small .stack 400h .data message db "hello. ", 0ah, 0dh, "$" firstdigit db ? seconddigit db ? thirddigit db ? number dw ? newnumber db ? anumber dw 0d bnumber dw 0d Firstn db 0ah, 0dh, "Enter first 3 digit number: ","$" secondn db 0ah, 0dh, "Enter second 3 digit number: ","$" messageB db 0ah, 0dh, "HCF of two number is: ","$" linebreaker db 0ah, 0dh, ' ', 0ah, 0dh, '$' .code Start: mov ax, @data ; establish access to the data segment mov ds, ax ; mov number, 0d mov dx, offset message ; print the string "yob choi 0648293" mov ah, 9h int 21h num: mov dx, offset Firstn ; print the string "put 1st 3 digit" mov ah, 9h int 21h ;run JMP FirstFirst ; jump to FirstFirst FirstFirst: ;first digit mov ah, 1d ;bios code for read a keystroke int 21h ;call bios, it is understood that the ascii code will be returned in al mov firstdigit, al ;may as well save a copy sub al, 30h ;Convert code to an actual integer cbw ;CONVERT BYTE TO WORD. This takes whatever number is in al and ;extends it to ax, doubling its size from 8 bits to 16 bits ;The first digit now occupies all of ax as an integer mov cx, 100d ;This is so we can calculate 100*1st digit +10*2nd digit + 3rd digit mul cx ;start to accumulate the 3 digit number in the variable imul cx ;it is understood that the other operand is ax ; the result will use both dx::ax ;dx will contain only leading zeros add anumber, ax ;save ;Second Digit mov ah, 1d ;bios code for read a keystroke int 21h ;call bios, it is understood that the ascii code will be returned in al mov seconddigit, al ;may as well save a copy sub al, 30h ;Convert code to an actual integer cbw ;CONVERT BYTE TO WORD. This takes whatever number is in al and ;extends it to ax, boubling its size from 8 bits to 16 bits ;The first digit now occupies all of ax as an integer mov cx, 10d ;continue to accumulate the 3 digit number in the variable mul cx ;it is understood that the other operand is ax, containing first digit ;the result will use both dx::ax ;dx will contain only leading zeros. add anumber, ax ;save ;third Digit mov ah, 1d ;samething as above int 21h ; mov thirddigit, al ; sub al, 30h ; cbw ; add anumber, ax ; jmp num2 ;go to checks Num2: mov dx, offset secondn ; print the string "put 2nd 3 digits" mov ah, 9h int 21h ;run JMP SecondSecond SecondSecond: ;first digit mov ah, 1d ;bios code for read a keystroke int 21h ;call bios, it is understood that the ascii code will be returned in al mov firstdigit, al ;may as well save a copy sub al, 30h ;Convert code to an actual integer cbw ;CONVERT BYTE TO WORD. This takes whatever number is in al and ;extends it to ax, doubling its size from 8 bits to 16 bits ;The first digit now occupies all of ax as an integer mov cx, 100d ;This is so we can calculate 100*1st digit +10*2nd digit + 3rd digit mul cx ;start to accumulate the 3 digit number in the variable imul cx ;it is understood that the other operand is ax ; the result will use both dx::ax ;dx will contain only leading zeros add bnumber, ax ;save ;Second Digit mov ah, 1d ;bios code for read a keystroke int 21h ;call bios, it is understood that the ascii code will be returned in al mov seconddigit, al ;may as well save a copy sub al, 30h ;Convert code to an actual integer cbw ;CONVERT BYTE TO WORD. 
This takes whatever number is in al and ;extends it to ax, boubling its size from 8 bits to 16 bits ;The first digit now occupies all of ax as an integer mov cx, 10d ;continue to accumulate the 3 digit number in the variable mul cx ;it is understood that the other operand is ax, containing first digit ;the result will use both dx::ax ;dx will contain only leading zeros. add bnumber, ax ;save ;third Digit mov ah, 1d ;samething as above int 21h ; mov thirddigit, al ; sub al, 30h ; cbw ; add bnumber, ax ; jmp compare ;go to compare compare: CMP ax, anumber ;comparing numbB and Number JA comp1 ;go to comp1 if anumber is bigger CMP ax, anumber ; JB comp2 ;go to comp2 if anumber is smaller CMP ax, anumber ; JE equal ;go to equal if two numbers are the same JMP compare ;go to compare (avioding error) comp1: SUB ax, anumber; subtract smaller number from bigger number JMP compare ; comp2: SUB anumber, ax; subtract smaller number from bigger number JMP compare ; equal: mov ah, 9d ;make linkbreak after the 2nd 3 digit number mov dx, offset linebreaker int 21h mov ah, 9d ;print "HCF of two number is:" mov dx, offset messageB int 21h mov ax,anumber ;copying 2nd number into ax add al,30h ; converting to ascii mov newnumber,al ; copying from low part of register into newnumb mov ah, 2d ;bios code for print a character mov dl, newnumber ;we had saved the ascii code here int 21h ;call to bios JMP exit; exit: mov ah, 4ch int 21h ;exit the program End hi, this is a program that finds highest common factor of 2 different 3digit number. if i put 200, 235,312 (low numbers) it works fine. but if i put 500, 550, 654(bigger number) the program crashes after the 2nd 3digit number is entered. can you help me to find out what problem is?
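
    For reference, the algorithm the assembly is implementing is the subtraction form of Euclid's method. A minimal Python sketch of the same loop (a hypothetical helper name, and it prints the full result, whereas the assembly converts only one byte to a single ASCII character at the end):

        def hcf_by_subtraction(a, b):
            """Repeatedly subtract the smaller number from the larger until they match."""
            while a != b:
                if a > b:
                    a -= b
                else:
                    b -= a
            return a

        print(hcf_by_subtraction(200, 235))  # 5  - fits in one digit
        print(hcf_by_subtraction(500, 550))  # 50 - two digits, so one-character output cannot show it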

    Read the article

  • Android SQLite Problem: Program Crash When Try a Query!

    - by Skatephone
    Hi i have a problem programming with android SDK 1.6. I'm doing the same things of the "notepad exaple" but the programm crash when i try some query. If i try to do a query directly in to the DatabaseHelper create() metod it goes, but out of this function it doesn't. Do you have any idea? this is the source: public class DbAdapter { public static final String KEY_NAME = "name"; public static final String KEY_TOT_DAYS = "totdays"; public static final String KEY_ROWID = "_id"; private static final String TAG = "DbAdapter"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DATABASE_NAME = "flowratedb"; private static final String DATABASE_TABLE = "girl_data"; private static final String DATABASE_TABLE_2 = "girl_cyle"; private static final int DATABASE_VERSION = 2; /** * Database creation sql statement */ private static final String DATABASE_CREATE = "create table "+DATABASE_TABLE+" (id integer, name text not null, totdays int);"; private static final String DATABASE_CREATE_2 = "create table "+DATABASE_TABLE_2+" (ref_id integer, day long not null);"; private final Context mCtx; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(DATABASE_CREATE); db.execSQL(DATABASE_CREATE_2); db.delete(DATABASE_TABLE, null, null); db.delete(DATABASE_TABLE_2, null, null); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS "+DATABASE_TABLE); db.execSQL("DROP TABLE IF EXISTS "+DATABASE_TABLE_2); onCreate(db); } } public DbAdapter(Context ctx) { this.mCtx = ctx; } public DbAdapter open() throws SQLException { mDbHelper = new DatabaseHelper(mCtx); mDb = mDbHelper.getWritableDatabase(); return this; } public void close() { mDbHelper.close(); } public long createGirl(int id,String name, int totdays) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_ROWID, id); initialValues.put(KEY_NAME, name); initialValues.put(KEY_TOT_DAYS, totdays); return mDb.insert(DATABASE_TABLE, null, initialValues); } public long createGirl_fd_day(int refid, long fd) { ContentValues initialValues = new ContentValues(); initialValues.put("ref_id", refid); initialValues.put("calendar", fd); return mDb.insert(DATABASE_TABLE, null, initialValues); } public boolean updateGirl(int rowId, String name, int totdays) { ContentValues args = new ContentValues(); args.put(KEY_NAME, name); args.put(KEY_TOT_DAYS, totdays); return mDb.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0; } public boolean deleteGirlsData() { if (mDb.delete(DATABASE_TABLE_2, null, null)>0) if(mDb.delete(DATABASE_TABLE, null, null)>0) return true; return false; } public Bundle fetchAllGirls() { Bundle extras = new Bundle(); Cursor cur = mDb.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS}, null, null, null, null, null); cur.moveToFirst(); int tot = cur.getCount(); extras.putInt("tot", tot); int index; for (int i=0;i<tot;i++){ index=cur.getInt(cur.getColumnIndex("_id")); extras.putString("name"+index, cur.getString(cur.getColumnIndex("name"))); extras.putInt("totdays"+index, cur.getInt(cur.getColumnIndex("totdays"))); } cur.close(); return extras; } public Cursor fetchGirl(int rowId) throws SQLException { Cursor mCursor = 
mDb.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS}, KEY_ROWID + "=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } public Cursor fetchGirlCD(int rowId) throws SQLException { Cursor mCursor = mDb.query(true, DATABASE_TABLE_2, new String[] {"ref_id", "day"}, "ref_id=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } } Tank's Valerio From Italy :)
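
    One thing the snippet makes easy to miss is that createGirl_fd_day builds values for ref_id and "calendar" but inserts them into DATABASE_TABLE (girl_data) rather than DATABASE_TABLE_2. Whether or not that is the crash seen here, the following minimal sketch (plain Python sqlite3 rather than the Android API, with the table copied from the question) shows how SQLite reacts when an INSERT names columns the target table does not have:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE girl_data (id INTEGER, name TEXT NOT NULL, totdays INT)")

        try:
            # Same shape of mistake: these columns belong to a different table.
            conn.execute("INSERT INTO girl_data (ref_id, calendar) VALUES (?, ?)", (1, 1234567890))
        except sqlite3.OperationalError as e:
            print("insert failed:", e)   # e.g. "table girl_data has no column named ref_id"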

    Read the article

  • Server side Xforms form validation and integration into ASP.NET

    - by Nigel
    I have recently been investigating methods of creating web-based forms for an ASP.NET web application that can be edited and managed at runtime. For example an administrator might wish to add a new validation rule or a new set of fields. The holy grail would provide a means of specifying a form along with (potentially very complex) arbitrary validation rules, and allocation of data sources for each field. The specification would then be used to update the deployed form in the web application which would then validate submissions both on the client side and on the server side. My investigations led me to Xforms and a number of technologies that support it. One solution appears to be IBM Lotus Forms, but this requires a very large investment in terms of infrastructure, which makes it infeasible, although the forms designer may be useful as a stand-alone tool for creating the forms. I have also discounted browser plug-ins as the form must be publicly visible and cross-browser compliant. I have noticed that there are numerous javascript libraries that provide client side implementations given an Xforms schema. These would provide a partial solution but server side validation is still a requirement. Another option seems to involve the use of server side solutions such as the Java application Orbeon. Orbeon provides a tool for specifying the forms (although not as rich as Lotus Forms Designer), but the most interesting point is that it can translate an XForms schema into an XHTML form complete with validation. The fact that it is written in Java is not a big problem if it is possible to integrate with the existing ASP.NET application. So my question is whether anyone has done this before. It sounds like a problem that should have been solved but is inherently very complex. It seems possible to use an off-the-shelf tool to design the form and export it to an Xforms schema and xhtml form, and it seems possible to take that xforms schema and form and publish it using a client side library. What seems to be difficult is providing a means of validating the form submission on the server side and integrating the process nicely with .NET (although it seems the .NET community doesn't involve themselves with XForms; please correct me if I'm wrong on this count). I would be more than happy if a product provided something simple like a web service that could validate a submission against a schema. Maybe Orbeon does this but I'd be grateful if somebody in the know could point me in the right direction before I research it further. Many thanks.
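
    On the narrower wish at the end, "something simple like a web service that could validate a submission against a schema": the XSD part of that (not the richer XForms constraints) is easy to sketch independently of any particular product. A minimal Python example using the third-party lxml library, with made-up file names:

        from lxml import etree

        # Hypothetical schema and submission produced by the form.
        schema = etree.XMLSchema(etree.parse("form.xsd"))
        submission = etree.parse("submission.xml")

        if schema.validate(submission):
            print("submission is valid")
        else:
            for error in schema.error_log:
                print(error.line, error.message)   # feed these back to the client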

    Read the article

  • Serialization Error:Unable to generate a temporary class (result=1)

    - by ltech
    Running XSD.exe on my xml to generate C# class. All works well except on this property [System.Xml.Serialization.XmlArrayAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] [System.Xml.Serialization.XmlArrayItemAttribute("ATTRIBUTES", typeof(DOCUMENTDocumentATTRIBUTES), Form=System.Xml.Schema.XmlSchemaForm.Unqualified, IsNullable=false)] public DocumentATTRIBUTES[][] Document { get { return this.documentField; } set { this.documentField = value; } } I want to try and use CollectionBase, and this was my attempt public DocumentATTRIBUTESCollection Document { get { return this.documentField; } set { this.documentField = value; } } /// <remarks/> [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)] public partial class DocumentATTRIBUTES { private string _author; private string _maxVersions; private string _summary; /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string author { get { return _author; } set { _author = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string max_versions { get { return _maxVersions; } set { _maxVersions = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string summary { get { return _summary; } set { _summary = value; } } } public class DocumentAttributeCollection : System.Collections.CollectionBase { public DocumentAttributeCollection() : base() { } public DocumentATTRIBUTES this[int index] { get { return (DocumentATTRIBUTES)this.InnerList[index]; } } public void Insert(int index, DocumentATTRIBUTES value) { this.InnerList.Insert(index, value); } public int Add(DocumentATTRIBUTES value) { return (this.InnerList.Add(value)); } } However when I try to serialize my object using XmlSerializer serializer = new XmlSerializer(typeof(DocumentMetaData)); I get the error: {"Unable to generate a temporary class (result=1).\r\nerror CS0030: Cannot convert type 'DocumentATTRIBUTES' to 'DocumentAttributeCollection'\r\nerror CS1502: The best overloaded method match for 'DocumentAttributeCollection.Add(DocumentATTRIBUTES)' has some invalid arguments\r\nerror CS1503: Argument '1': cannot convert from 'DocumentAttributeCollection' to 'DocumentATTRIBUTES'\r\n"} the XSD pertaining to this property is <xs:complexType> <xs:sequence> <xs:element name="ATTRIBUTES" minOccurs="0" maxOccurs="unbounded"> <xs:complexType> <xs:sequence> <xs:element name="author" type="xs:string" minOccurs="0" /> <xs:element name="max_versions" type="xs:string" minOccurs="0" /> <xs:element name="summary" type="xs:string" minOccurs="0" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element>

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2. It seems to only want to work with XSLT. When I type: java -jar saxon9he.jar I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q), and gives XSLT usage information. Here are some command line interactions: >java -jar saxon9he.jar No source file name Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument -c:filename Use compiled stylesheet from file -config:filename Use configuration file -cr:classname Use collection URI resolver class -dtd:on|off Validate using DTD -expand:on|off Expand defaults defined in schema/DTD -explain[:filename] Display compiled expression tree -ext:on|off Allow|Disallow external Java functions -im:modename Initial mode -ief:class;class;... List of integrated extension functions -it:template Initial template -l:on|off Line numbering for source document -m:classname Use message receiver class -now:dateTime Set currentDateTime -o:filename Output file or directory -opt:0..10 Set optimization level (0=none, 10=max) -or:classname Use OutputURIResolver class -outval:recover|fatal Handling of validation errors on result document -p:on|off Recognize URI query parameters -r:classname Use URIResolver class -repeat:N Repeat N times for performance measurement -s:filename Initial source document -sa Use schema-aware processing -strip:all|none|ignorable Strip whitespace text nodes -t Display version and timing information -T[:classname] Use TraceListener class -TJ Trace calls to external Java functions -tree:tiny|linked Select tree model -traceout:file|#null Destination for fn:trace() output -u Names are URLs not filenames -val:strict|lax Validate using schema -versionmsg:on|off Warn when using XSLT 1.0 stylesheet -warnings:silent|recover|fatal Handling of recoverable errors -x:classname Use specified SAX parser for source file -xi:on|off Expand XInclude on all documents -xmlversion:1.0|1.1 Version of XML to be handled -xsd:file;file.. Additional schema documents to be loaded -xsdversion:1.0|1.1 Version of XML Schema to be used -xsiloc:on|off Take note of xsi:schemaLocation -xsl:filename Stylesheet file -y:classname Use specified SAX parser for stylesheet --feature:value Set configuration feature (see FeatureKeys) -? Display this message param=value Set stylesheet string parameter +param=filename Set stylesheet document parameter ?param=expression Set stylesheet parameter using XPath !option=value Set serialization option >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq" Unknown option -q:..\w3xQueryTut.xq Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument // etc... >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq" Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query // etc... Could not find the main class: net.sf.saxon.Query. Program will exit. I'm probably making some stupid mistake. Do you know what it could be?

    Read the article

  • Neo4j Windows Plugin Installation desktop V2.M06

    - by user2904850
    I have downloaded and installed the latest Window V2 Community M06 build of Neo4j on a windows 7 64 bit machine (I have tried the 32bit and 64 bit installs). Installation proceeds without and problems and the system runs normally but there is no /plugins directory (on both versions):- \Program Files (x86)\Neo4j Community\ \Program Files (x86)\Neo4j Community\.install4j \Program Files (x86)\Neo4j Community\bin I am trying to install the Spatial plugin .. so I tried creating the \plugins directory. I extracted the zip file and left the zip file in the directory but the plugins are not found:- C:\Users\WFN44217>curl localhost:7474/db/data/ { "extensions" : { }, "node" : "http://localhost:7474/db/data/node", "reference_node" : "http://localhost:7474/db/data/node/0", "node_index" : "http://localhost:7474/db/data/index/node", "relationship_index" : "http://localhost:7474/db/data/index/relationship", "extensions_info" : "http://localhost:7474/db/data/ext", "relationship_types" : "http://localhost:7474/db/data/relationship/types", "batch" : "http://localhost:7474/db/data/batch", "cypher" : "http://localhost:7474/db/data/cypher", "transaction" : "http://localhost:7474/db/data/transaction", "neo4j_version" : "2.0.0-M06" } I have tried some other plugins, but these are also not found. Any idea what might be missing? Extract from the log files: 2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: VM Arguments: [-Dexe4j.semaphoreName=Local\c:_program_files_neo4j_community_bin_neo4j-community.exe, -Dexe4j.isInstall4j=true, -Dexe4j.moduleName=C:\Program Files\Neo4j Community\bin\neo4j-community.exe, -Dexe4j.processCommFile=C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp, -Dexe4j.tempDir=, -Dexe4j.unextractedPosition=0, -Djava.library.path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin\;C:\Program Files\Java\jdk1.7.0_40\bin\;C:\Program Files (x86)\Git\cmd;C:\Program Files (x86)\Git\bin;c:\ikvm-7.2.4630.5\bin;C:\Program Files\nodejs\;C:\Users\WFN44217\AppData\Roaming\npm;c:\program files\neo4j community\jre\bin, -Dexe4j.consoleCodepage=cp0, -Dinstall4j.launcherId=24, -Dinstall4j.swt=false] 2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: Java classpath: 2013-10-17 11:19:31.883+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/neo4j-desktop-2.0.0-M06.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\charsets.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\.install4j\i4jruntime.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/jaccess.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/zipfs.jar 2013-10-17 11:19:31.884+0000 INFO 
[o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\resources.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jfr.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jsse.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\bin\neo4j-desktop-2.0.0-M06.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunec.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\classes 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\rt.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/ 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunmscapi.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dns_sd.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dnsns.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunjce_provider.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/localedata.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/access-bridge-64.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/.install4j/i4jruntime.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\sunrsasign.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jce.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: Library path: 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\wbem 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\WindowsPowerShell\v1.0 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft\Web Platform Installer 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\Tools\Binn 2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\DTS\Binn 2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn 2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL 
Server\110\Tools\Binn\ManagementStudio 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Java\jdk1.7.0_40\bin 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\cmd 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\bin 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\ikvm-7.2.4630.5\bin 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\nodejs 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Users\WFN44217\AppData\Roaming\npm 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Neo4j Community\jre\bin 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: System.properties: 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.moduleName = C:\Program Files\Neo4j Community\bin\neo4j-community.exe 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.processCommFile = C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.semaphoreName = Local\c:_program_files_neo4j_community_bin_neo4j-community.exe 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.boot.library.path = c:\program files\neo4j community\jre\bin 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.consoleCodepage = cp0 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: path.separator = ; 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: file.encoding.pkg = sun.io 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.country = GB 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.script = 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.os.patch.level = Service Pack 1 2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: install4j.exeDir = C:\Program Files\Neo4j Community\bin\ 2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: user.dir = C:\Program Files\Neo4j Community\bin

    Read the article

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled. The class whose instances are being deleted is: @Entity( name = "users.Credentials" ) @Table( name = "credentials" , schema = "users" ) public class Credentials extends ModelBase { private static final long serialVersionUID = 1L; /* Some basic fields here */ /** Administrator credentials, if any */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public AdminCredentials adminCredentials; /** Active account data */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public Account activeAccount; /* Some more reverse relations here */ } (ModelBase is a class that simply declares a Long field named "id" as being automatically generated) The Account class, which is one for which constraints work, looks like this: @Entity( name = "users.Account" ) @Table( name = "accounts" , schema = "users" ) public class Account extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials the account is linked to */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } And here is the AdminCredentials class, for which the constraints are ignored. @Entity( name = "admin.Credentials" ) @Table( name = "admin_credentials" , schema = "admin" ) public class AdminCredentials extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials linked with an administrative account */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } The code that attempts to delete the Credentials instances is: try { if ( account.validationKey != null ) { this.hTemplate.delete( account.validationKey ); } this.hTemplate.delete( account.languageSetting ); this.hTemplate.delete( account ); } catch ( DataIntegrityViolationException e ) { return false; } Where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER. In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored, the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists). I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code - it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.
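
    Setting Hibernate aside for a moment, the behaviour being expected here, a delete that fails because a child row holds a non-nullable reference to the parent, is ultimately the database's foreign-key constraint. A minimal sketch of that underlying behaviour (Python's built-in sqlite3 with simplified tables, not the HSQLDB/Hibernate stack in question):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked
        conn.execute("CREATE TABLE credentials (id INTEGER PRIMARY KEY)")
        conn.execute("""CREATE TABLE admin_credentials (
                            id INTEGER PRIMARY KEY,
                            credentials_id INTEGER NOT NULL REFERENCES credentials(id))""")
        conn.execute("INSERT INTO credentials (id) VALUES (1)")
        conn.execute("INSERT INTO admin_credentials (id, credentials_id) VALUES (1, 1)")

        try:
            conn.execute("DELETE FROM credentials WHERE id = 1")
        except sqlite3.IntegrityError as e:
            print("delete rejected:", e)   # FOREIGN KEY constraint failed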

    Read the article

  • Please Critique this PHP Login Script

    - by NightMICU
    Greetings, A site I developed was recently compromised, most likely by a brute force or Rainbow Table attack. The original log-in script did not have a SALT, passwords were stored in MD5. Below is an updated script, complete with SALT and IP address banning. In addition, it will send a Mayday email & SMS and disable the account should the same IP address or account attempt 4 failed log-ins. Please look it over and let me know what could be improved, what is missing, and what is just plain strange. Many thanks! <?php //Start session session_start(); //Include DB config include $_SERVER['DOCUMENT_ROOT'] . '/includes/pdo_conn.inc.php'; //Error message array $errmsg_arr = array(); $errflag = false; //Function to sanitize values received from the form. Prevents SQL injection function clean($str) { $str = @trim($str); if(get_magic_quotes_gpc()) { $str = stripslashes($str); } return $str; } //Define a SALT, the one here is for demo define('SALT', '63Yf5QNA'); //Sanitize the POST values $login = clean($_POST['login']); $password = clean($_POST['password']); //Encrypt password $encryptedPassword = md5(SALT . $password); //Input Validations //Obtain IP address and check for past failed attempts $ip_address = $_SERVER['REMOTE_ADDR']; $checkIPBan = $db->prepare("SELECT COUNT(*) FROM ip_ban WHERE ipAddr = ? OR login = ?"); $checkIPBan->execute(array($ip_address, $login)); $numAttempts = $checkIPBan->fetchColumn(); //If there are 4 failed attempts, send back to login and temporarily ban IP address if ($numAttempts == 1) { $getTotalAttempts = $db->prepare("SELECT attempts FROM ip_ban WHERE ipAddr = ? OR login = ?"); $getTotalAttempts->execute(array($ip_address, $login)); $totalAttempts = $getTotalAttempts->fetch(); $totalAttempts = $totalAttempts['attempts']; if ($totalAttempts >= 4) { //Send Mayday SMS $to = "[email protected]"; $subject = "Banned Account - $login"; $mailheaders = 'From: [email protected]' . "\r\n"; $mailheaders .= 'Reply-To: [email protected]' . "\r\n"; $mailheaders .= 'MIME-Version: 1.0' . "\r\n"; $mailheaders .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n"; $msg = "<p>IP Address - " . $ip_address . ", Username - " . $login . 
"</p>"; mail($to, $subject, $msg, $mailheaders); $setAccountBan = $db->query("UPDATE ip_ban SET isBanned = 1 WHERE ipAddr = '$ip_address'"); $setAccountBan->execute(); $errmsg_arr[] = 'Too Many Login Attempts'; $errflag = true; } } if($login == '') { $errmsg_arr[] = 'Login ID missing'; $errflag = true; } if($password == '') { $errmsg_arr[] = 'Password missing'; $errflag = true; } //If there are input validations, redirect back to the login form if($errflag) { $_SESSION['ERRMSG_ARR'] = $errmsg_arr; session_write_close(); header('Location: http://somewhere.com/login.php'); exit(); } //Query database $loginSQL = $db->prepare("SELECT password FROM user_control WHERE username = ?"); $loginSQL->execute(array($login)); $loginResult = $loginSQL->fetch(); //Compare passwords if($loginResult['password'] == $encryptedPassword) { //Login Successful session_regenerate_id(); //Collect details about user and assign session details $getMemDetails = $db->prepare("SELECT * FROM user_control WHERE username = ?"); $getMemDetails->execute(array($login)); $member = $getMemDetails->fetch(); $_SESSION['SESS_MEMBER_ID'] = $member['user_id']; $_SESSION['SESS_USERNAME'] = $member['username']; $_SESSION['SESS_FIRST_NAME'] = $member['name_f']; $_SESSION['SESS_LAST_NAME'] = $member['name_l']; $_SESSION['SESS_STATUS'] = $member['status']; $_SESSION['SESS_LEVEL'] = $member['level']; //Get Last Login $_SESSION['SESS_LAST_LOGIN'] = $member['lastLogin']; //Set Last Login info $updateLog = $db->prepare("UPDATE user_control SET lastLogin = DATE_ADD(NOW(), INTERVAL 1 HOUR), ip_addr = ? WHERE user_id = ?"); $updateLog->execute(array($ip_address, $member['user_id'])); session_write_close(); //If there are past failed log-in attempts, delete old entries if ($numAttempts > 0) { //Past failed log-ins from this IP address. Delete old entries $deleteIPBan = $db->prepare("DELETE FROM ip_ban WHERE ipAddr = ?"); $deleteIPBan->execute(array($ip_address)); } if ($member['level'] != "3" || $member['status'] == "Suspended") { header("location: http://somewhere.com"); } else { header('Location: http://somewhere.com'); } exit(); } else { //Login failed. Add IP address and other details to ban table if ($numAttempts < 1) { //Add a new entry to IP Ban table $addBanEntry = $db->prepare("INSERT INTO ip_ban (ipAddr, login, attempts) VALUES (?,?,?)"); $addBanEntry->execute(array($ip_address, $login, 1)); } else { //increment Attempts count $updateBanEntry = $db->prepare("UPDATE ip_ban SET ipAddr = ?, login = ?, attempts = attempts+1 WHERE ipAddr = ? OR login = ?"); $updateBanEntry->execute(array($ip_address, $login, $ip_address, $login)); } header('Location: http://somewhere.com/login.php'); exit(); } ?>

    Read the article

  • Dynamic Multiple Choice (Like a Wizard) - How would you design it? (e.g. Schema, AI model, etc.)

    - by henry74
    This question can probably be broken up into multiple questions, but here goes... In essence, I'd like to allow users to type in what they would like to do and provide a wizard-like interface to ask for information which is missing to complete a requested query. For example, let's say a user types: "What is the weather like in Springfield?" We recognize the user is interested in weather, but it could be Springfield, Il or Springfield in another state. A follow-up question would be: What Springfield did you want weather for? 1 - Springfield, Il 2 - Springfield, Wi You can probably think of a million examples where a request is missing key data or its ambiguous. Make the assumption the gist of what the user wants can be understood, but there are missing pieces of data required to complete the request. Perhaps you can take it as far back as asking what the user wants to do and "leading" them to a query. This is not AI in the sense of taking any input and truly understanding it. I'm not referring to having some way to hold a conversation with a user. It's about inferring what a user wants, checking to see if there is an applicable service to be provided, identifying the inputs needed and overlaying that on top of what's missing from the request, then asking the user for the remaining information. That's it! :-) How would you want to store the information about services? How would you go about determining what was missing from the input data? My thoughts: Use regex expressions to identify clear pieces of information. These will be matched to the parameters of a service. Figure out which parameters do not have matching data and look up the associated question for those parameters. Ask those questions and capture answers. Re-run the service passing in the newly captured data. These would be more free-form questions. For multiple choice, identify the ambiguity and search for potential matches ranked in order of likelihood (add in user history/preferences to help decide). Provide the top 3 as choices. Thoughts appreciated. Cheers, Henry
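
    A minimal sketch of the regex-plus-missing-parameter idea from the "My thoughts" list, in Python; the service definition, patterns and follow-up questions are all invented for illustration:

        import re

        # A "service" is just its required parameters, a question per parameter,
        # and regexes that try to pull each parameter out of the free-form request.
        weather_service = {
            "params": ["city", "state"],
            "questions": {"city": "Which city did you mean?",
                          "state": "Which state is that in?"},
            "patterns": {"city": re.compile(r"in ([A-Z][a-z]+)"),
                         "state": re.compile(r",\s*([A-Z]{2})\b")},
        }

        def fill_request(service, text):
            found = {}
            for name in service["params"]:
                match = service["patterns"][name].search(text)
                if match:
                    found[name] = match.group(1)
            for name in service["params"]:
                if name not in found:                     # wizard step: ask only for what is missing
                    found[name] = input(service["questions"][name] + " ")
            return found

        print(fill_request(weather_service, "What is the weather like in Springfield?"))
        # -> finds city=Springfield, then asks "Which state is that in?"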

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

        item_names:    id | name | description | picture | common(BOOL)
        items:         id | item_name_id | picture | price | description | picture
        item_synonyms: id | item_name_id | name | error(BOOL)

    Notes: error indicates a wrong spelling (e.g. "Ericson"). The description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:
    Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")
    Autocompletion: again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The downside would be:
    Overhead: when inserting an item, I query item_names to see if this name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). Updating is the combination of both.
    Weird item names: store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:

        items: id | name | picture | price | description | picture
        (... with item_names and item_synonyms as utility tables that I could query)

    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School" and "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
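
    For what it's worth, the lookup side of the first schema is easy to prototype. A small Python sqlite3 sketch of the "search names and synonyms, then join to items" flow (table contents are invented, and LIKE '%...%' is kept from the question even though a real autocomplete would more likely use a prefix match or a full-text index):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE item_names    (id INTEGER PRIMARY KEY, name TEXT, common BOOL);
            CREATE TABLE items         (id INTEGER PRIMARY KEY, item_name_id INT, price REAL);
            CREATE TABLE item_synonyms (id INTEGER PRIMARY KEY, item_name_id INT, name TEXT, error BOOL);
            INSERT INTO item_names    VALUES (1, 'Sony Ericsson Xperia X10', 0);
            INSERT INTO item_synonyms VALUES (1, 1, 'Sony Ericson Xperia X10', 1);
            INSERT INTO items         VALUES (10, 1, 299.0);
        """)

        query = "%Ericson%"   # misspelled search term; it only matches via the synonym
        rows = conn.execute("""
            SELECT items.id, item_names.name, items.price
            FROM item_names
            LEFT JOIN item_synonyms ON item_synonyms.item_name_id = item_names.id
            JOIN items              ON items.item_name_id = item_names.id
            WHERE item_names.name LIKE ? OR item_synonyms.name LIKE ?
        """, (query, query)).fetchall()
        print(rows)   # [(10, 'Sony Ericsson Xperia X10', 299.0)]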

    Read the article

  • Manually adding a new instance to an 11.2 RAC database

    - by JaneZhang
    (The original post is not in English and much of its prose was mis-encoded; the steps below are reconstructed in English around the commands, which survived intact.)

    Normally, after a new node has been added to the cluster, dbca is used to add a database instance for it. This note walks through adding an instance to an 11.2 RAC database manually, without dbca, which also shows what dbca does behind the scenes. In the example, instance RACDB2 is added for database RACDB; the existing instance RACDB1 runs on node rac1 and the new instance will run on node rac2. In 11.2 the Grid Infrastructure runs as the grid user and the database as the oracle user; unless noted otherwise the steps are performed as oracle.

    1. On the new node, create the directories the new instance needs, such as those for audit_file_dest, background_dump_dest, user_dump_dest and core_dump_dest (for example audit_file_dest=/u01/app/oracle/admin/RACDB/adump). If they are missing, the instance fails to start with:
        ORA-09925: Unable to create audit trail file
        Linux-x86_64 Error: 2: No such file or directory
        Additional information: 9925

    2. Set the instance-specific initialization parameters for the new instance:
        SQL> alter system set instance_number=2 scope=spfile sid='RACDB2';
        SQL> alter system set thread=2 scope=spfile sid='RACDB2';
        SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='RACDB2';
        SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.122)(PORT=1521))))' sid='RACDB2'; <===== 192.0.2.122 is the VIP of node 2

    3. Copy the existing instance's $ORACLE_HOME/dbs/init<sid>.ora to the new node as the new instance's init<sid>.ora; it only points at the shared spfile:
        =======================
        SPFILE='+DATA/racdb/spfileracdb.ora'
      For example:
        [oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/initRACDB1.ora rac2:$ORACLE_HOME/dbs/initRACDB2.ora <=== copy to node 2

    4. On the new node, add an entry for the new instance to /etc/oratab:
        RACDB2:/u01/app/oracle/product/11.2.0/dbhome_1:N

    5. Copy the password file, $ORACLE_HOME/dbs/ora<sid>.pwd, from the existing node to the new node:
        [oracle@rac1 dbs]$ scp $ORACLE_HOME/dbs/orapwRACDB1 rac2:$ORACLE_HOME/dbs/orapwRACDB2 <== copy to node 2

    6. From an existing instance, create the undo tablespace for the new instance (when dbca adds an instance it creates the undo tablespace for you; here it must be done by hand):
        SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '/dev/….' SIZE 4096M ;
      or, with ASM:
        SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '+DATA' SIZE 4096M ;

    7. From an existing instance, add a redo thread and redo logs for the new instance:
        SQL> alter database add logfile thread 2
             group 3 ('/dev/...', '/dev/...') size 1024M,
             group 4 ('/dev/...','dev/...') size 1024M;
      or, with ASM:
        SQL> alter database add logfile thread 2
             group 3 ('+DATA','+RECO') size 1024M,
             group 4 ('+DATA','+RECO') size 1024M;
        SQL> alter database enable thread 2; <== enable the new redo thread

    8. On the new node, start the new instance:
        [oracle@rac2 admin]$ su - oracle
        [oracle@rac2 admin]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
        [oracle@rac2 admin]$ export ORACLE_SID=RACDB2
        [oracle@rac2 admin]$ sqlplus / as sysdba
        SQL> startup <== if startup succeeds, the new instance itself is working

    9. Register the new instance in the OCR so that the GI manages it:
        $ srvctl add instance -d <database name> -i <new instance name> -n <new node name>
      Example of srvctl add instance command:
        ============================
        [oracle@rac2 ~]$ srvctl add instance -d racdb -i RACDB2 -n rac2 <== afterwards the instance process can be checked with ps -ef|grep smon
        [oracle@rac2 dbs]$ ps -ef|grep smon
        root      3453     1  1 Jun12 ?        04:03:05 /u01/app/11.2.0/grid/bin/osysmond.bin
        grid      3727     1  0 Jun12 ?        00:00:19 asm_smon_+ASM2
        oracle    5343  4543  0 14:06 pts/1    00:00:00 grep smon
        oracle   28736     1  0 Jun25 ?        00:00:03 ora_smon_RACDB2 <======== the new instance

    10. Check the resource status:
        $ su - grid
        [grid@rac2 ~]$ crsctl stat res -t
        ...
        ora.racdb.db
              1        ONLINE  ONLINE       rac1                     Open
              2        OFFLINE OFFLINE             rac2
      The new instance shows OFFLINE because it was started with sqlplus rather than through the clusterware. Shut it down with sqlplus and start it again with srvctl:
        [grid@rac2 ~]$ su  - oracle
        Password:
        [oracle@rac2 ~]$ sqlplus / as sysdba
        SQL> shutdown immediate;
        Database closed.
        Database dismounted.
        ORACLE instance shut down.
        SQL> exit
        [oracle@rac2 ~]$ srvctl start instance -d racdb -i RACDB2
        [oracle@rac2 ~]$ su - grid
        Password:
        [grid@rac2 ~]$ crsctl stat res -t
        ora.racdb.db
              1        ONLINE  ONLINE       rac1                     Open
              2        ONLINE  ONLINE       rac2                     Open

    11. Check the resource configuration:
        [oracle@rac2 ~]$ crsctl stat res ora.racdb.db -p
        NAME=ora.racdb.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=2
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=30
        CLUSTER_DATABASE=true
        DATABASE_TYPE=RAC
        DB_UNIQUE_NAME=RACDB
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/u01/app/oracle/admin/RACDB/adump
        GEN_START_OPTIONS=
        GEN_START_OPTIONS@SERVERNAME(rac1)=open
        GEN_START_OPTIONS@SERVERNAME(rac2)=open
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ONLINE_RELOCATION_TIMEOUT=0
        ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
        ORACLE_HOME_OLD=
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.RACDB
        SPFILE=+DATA/RACDB/spfileRACDB.ora
        START_DEPENDENCIES=hard(ora.DATA.dg,ora.RECO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.DATA.dg,ora.RECO.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.RECO.dg)
        STOP_TIMEOUT=600
        TYPE_VERSION=3.2
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=RACDB
        USR_ORA_DOMAIN=
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=
        USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
        USR_ORA_INST_NAME@SERVERNAME(rac2)=RACDB2
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.3.0

    As the output shows, in 11.2 the database resource in the OCR records, per node, which instance runs there (the USR_ORA_INST_NAME@SERVERNAME entries) and the start/stop dependencies on the ASM disk groups, which is why the new instance must be registered with srvctl rather than only started by hand.

    Note: to add the instance with dbca instead, run it as the oracle user on an existing node: su - oracle, then dbca, choose "RAC database", choose "Instance Management", choose "add an instance", select the active RAC database, supply the new instance and node details, and pick the undo and redo settings.

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and Configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have to build a java program to do this with SQL then use an application service but it is now possible with just configuration. The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL or use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into a Info-Explorer Zone. You can use any of the explorer zone types but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI for criteria (remember explorer zones can be used on the user Interface as well) but for web services they can be used as normal filters (this means you can use up to 20 filters all up). Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows a explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self explanatory in terms of the structure. The schema pattern looks like this: Zone element - maps to the ZONE_CD element and the default value is the zone name you just created. This links the business service to the zone. Filter elements - You name the filters as you like but the mapField is set to Fx_VALUE where x is the filter number corresponding to the filter element in the zone definition. Hidden filter elements - You name the filters as you like but the mapField is set to Hx_VALUE where x is the filter number corresponding to the hidden filter element in the zone definition. results group - this holds the elements of the result set. Each element in your result set has a tagname and is linked to the COL_VALUE mapField and the row element is lists the SEQNO of the column. This corresponds to the column number in the results set in the zone. An example schema is shown below for the F1-USGRACML zone, which returns the access modes for a user group and application service filters. In the example, the userGroup and applicationService elements are the filters and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is that you copy the schema from one of the examples to save you typing it from scratch. You can simply modify the tags and other elements to suit your needs. Once the Business Service is defined it can simply be defined as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon a Info Explorer Zone. This is one of my favorite components as it allows interfaces to be simplified. This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating exciting developments we are working on at the moment as they are released.

    Read the article

  • BizTalk Testing Series - The xpath Function

    - by Michael Stephenson
    Background While the xpath function in a BizTalk orchestration is a very powerful feature, I have often come across the situation where someone has hard coded an xpath expression in an orchestration. If you have read some of my previous posts about testing, I've tried to get across the general theme of test-driven or test-assisted development approaches, where the underlying principle is that you're building up your solution from small, well-tested units that are put together, and the resulting solution is usually quite robust. You will find more bugs within your unit tests and fewer outside of your team. The thing I don't like about the xpath function's usual usage is when you come across an orchestration which has something like the below snippet in an expression or assign shape: string result = xpath(myMessage,"string(//Order/OrderItem/ProductName)"); My main issue with this is that the xpath statement is hard coded in the orchestration and you don't really know it works until you are running the orchestration. Some of the problems I think you end up with are:
    - You waste time with lengthy debugging of the orchestration when your statement isn't working
    - You might not know the function isn't working quite as expected because the testable unit around it is big
    - You are much more open to regression issues if your schema changes
    Approach to Testing The technique I usually follow is to hold the xpath statement as a constant in a helper class, or to format a constant with a helper function to get the actual xpath statement. It is then used by the orchestration as follows. string result = xpath(myMessage, MyHelperClass.ProductNameXPathStatement); Because the xpath statement is available outside of the orchestration, it now becomes testable in its own right. This means:
    - I can test it in its own right
    - I'm less likely to waste time tracking down problems caused by an error in the statement
    - I can reduce the risk of regression issues
    I'm now able to implement some testing around my xpath statements, which usually goes something like the following: the test uses a sample xml file, the sample is validated against the schema, and the test executes the xpath statement and then checks the results are as expected. Walk-through BizTalk uses the XPathNavigator internally behind the xpath function to implement the queries, usually via the navigator's Select or Evaluate functions. In the sample (link at bottom) I have a small solution which contains a schema from which I have generated a sample instance, and I use this instance as the basis for my tests. The sample contains the helper class in which I've encapsulated my xpath expressions, along with some helper functions which format the expression in the case of a repeating node where you would want to inject an index into the xpath query. I have then created a test class with functions that execute some queries against my sample xml file. In the test class I have a couple of helper functions which execute the xpath expressions in a similar way to BizTalk; you could have a proper helper class to do this if you wanted. In the BizTalk expression editor I can then use these functions alongside the xpath function.
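    To make that concrete, here is a minimal sketch of what the helper class and a test could look like. The class, method and element names here are made up for illustration and are not the ones in the downloadable sample:
    using System.IO;
    using System.Xml.XPath;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Holds the xpath expressions outside of the orchestration so they can be tested on their own.
    public static class OrderXPathStatements
    {
        public const string ProductName = "string(//Order/OrderItem/ProductName)";

        // Formats an expression for a repeating node by injecting an index.
        public static string ProductNameAt(int index)
        {
            return string.Format("string(//Order/OrderItem[{0}]/ProductName)", index);
        }
    }

    [TestClass]
    public class OrderXPathTests
    {
        // Executes an expression in a similar way to the BizTalk xpath function (XPathNavigator.Evaluate).
        private static object Evaluate(string xml, string expression)
        {
            var document = new XPathDocument(new StringReader(xml));
            return document.CreateNavigator().Evaluate(expression);
        }

        [TestMethod]
        public void ProductNameStatementReturnsExpectedValue()
        {
            string sample = "<Order><OrderItem><ProductName>Widget</ProductName></OrderItem></Order>";
            Assert.AreEqual("Widget", Evaluate(sample, OrderXPathStatements.ProductName));
        }
    }
    In the real sample the xml would come from the generated sample instance file and be validated against the schema first, as described above.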
Conclusion I hope you can see that with very little effort you can make your life much easier by testing xpath statements outside of an orchestration rather than using them directly hard coded into the orchestration. This can also save you lots of pain longer term, because your build should break if your schema changes unexpectedly, causing these xpath tests to fail, whereas with tests only around the orchestration it will be more difficult to troubleshoot and work out the cause of the problem. Sample Link The sample is available from the following link: http://code.msdn.microsoft.com/testbtsxpathfunction Other Tools On the subject of using the xpath function, if you don't already use it, the below tool is very useful for creating your xpath statements (thanks BizBert): http://www.bizbert.com/bizbert/2007/11/30/XPath+The+Hidden+Language+Of+BizTalk.aspx

    Read the article

  • How to run RCU from the command line

    - by Kevin Smith
    When I was trying to figure out how to run RCU on 64-bit Linux I found this post. It shows how to run RCU from the command line. It didn't actually work for me, so you can see my post on how to run RCU on 64-bit Linux. But seeing how to run RCU from the command line got me started thinking about running RCU from the command line to create the schema for WebCenter Content. That post got me part of the way there since it shows how to run RCU silently from the command line, but to do this you need to know the name of the RCU component for WebCenter Content. I poked around in the RCU files and found the component name for WCC is CONTENTSERVER11. There is a contentserver11 directory in rcuHome/rcu/integration and when you look at the contentserver11.xml file you will see <RepositoryConfig COMP_ID="CONTENTSERVER11"> With the component name for WCC in hand I was able to use this command line to run RCU and create the schema for WCC. .../rcuHome/bin/rcu -silent -createRepository -databaseType ORACLE -connectString localhost:1521:orcl1 -dbUser sys -dbRole sysdba -schemaPrefix TEST -component CONTENTSERVER11 -f <rcu_passwords.txt To make the silent part work and not have it prompt you for the passwords needed (sys password and password for each schema) you use the -f option and specify a file containing the passwords, one per line, in the order the components are listed on the -component argument. Here is the output from rcu when I ran the above command.
    Processing command line ....
    Repository Creation Utility - Checking Prerequisites
    Checking Global Prerequisites
    Repository Creation Utility - Checking Prerequisites
    Checking Component Prerequisites
    Repository Creation Utility - Creating Tablespaces
    Validating and Creating Tablespaces
    Repository Creation Utility - Create
    Repository Create in progress.
    Percent Complete: 0
    ...
    Percent Complete: 100
    Repository Creation Utility: Create - Completion Summary
    Database details:
    Host Name              : localhost
    Port                   : 1521
    Service Name           : ORCL1
    Connected As           : sys
    Prefix for (prefixable) Schema Owners : TEST
    RCU Logfile            : /u01/app/oracle/logdir.2012-09-26_07-53/rcu.log
    Component schemas created:
    Component                            Status  Logfile
    Oracle Content Server 11g - Complete Success /u01/app/oracle/logdir.2012-09-26_07-53/contentserver11.log
    Repository Creation Utility - Create : Operation Completed
    This works fine if you want to use the default tablespace sizes and options, but there does not seem to be a way to specify the tablespace options on the command line. You can specify the name of the tablespace and temp tablespace, but they must already exist in the database before running RCU. I guess you can always create the tablespaces first using your desired sizes and options and then run RCU and specify the tablespaces you created. When looking up the command line options in the RCU doc I found it has the list of components for each product that it supports. See Appendix B in the RCU User's Guide.
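    For reference, the file passed via -f is just plain text with one password per line. Assuming the command above, rcu_passwords.txt could look something like the sketch below (the values are placeholders; the first line is the sys/dba password, followed by one line per component in the order given on -component):
    MySysPassword1
    MyWccSchemaPassword1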

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top-notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme with the title: F# Today/F# Tomorrow. He showed how to use F# scripting to browse through open databases, OData Web Services, Sharepoint, … interactively. It looked really easy with the help of F# Type Providers, which are the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like in a usual strongly typed object model. No guessing what a property of an object is called; Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. If we use e.g. a free database, it would then be let data = DbProvider(http://.....); with data being the object which contains all data from e.g. a chemical database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed, which makes this happen. The same can of course be achieved with code generators that take as input the schema of the input data (OData Web Service, database, Sharepoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work, but it has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators, since the generated "deserialization" assembly could easily become several hundred MB. *** The following part contains guessing how this exactly works, based on asking Don two questions **** Q: Can I use Type Providers within C#? D: No. Q: F# is after all a library. Can I reference the F# assemblies and use the contained Type Providers? D: F# does annotate the generated types in a special way at runtime which is not a static type that C# could use. The F# type providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the url of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top-level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual url in a config file. Ok, so this explains why it works at all. But in the demo we did see full intellisense support down to the deepest object level. It looks like, if you navigate deeper into the object hierarchy, the type provider is instantiated in the background and attaches to a true static type the properties determined at run time while you are typing. So this type is not really static at all. It is static only in the sense that its properties show up in intellisense. But since this type information is determined while you are typing and is not used to generate a true static type, you cannot use these "intellistatic" types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any datasource within seconds to quickly get an overview of any structured data storage. My favorite programming language C# will not get such features in the near future, but there is hope. 
If you restrict yourself to OData sources you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow with a LINQ query, and the output is also nicely rendered, which makes it a very good tool to explore OData sources today.

    Read the article

  • An open plea to Microsoft to fix the serializers in WCF.

    - by Scott Wojan
    I simply DO NOT understand how Microsoft can be this far along with a tool like WCF and STILL tout it as being an "Enterprise" tool. For example... The following is a simple xsd schema with a VERY simple data contract that any enterprise would expect an "enterprise system" to be able to handle:
    <?xml version="1.0" encoding="utf-8"?>
    <xs:schema id="Sample"
        targetNamespace="http://tempuri.org/Sample.xsd"
        elementFormDefault="qualified"
        xmlns="http://tempuri.org/Sample.xsd"
        xmlns:mstns="http://tempuri.org/Sample.xsd"
        xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="SomeDataElement">
        <xs:annotation>
          <xs:documentation>This documents the data element. This sure would be nice for consumers to see!</xs:documentation>
        </xs:annotation>
        <xs:complexType>
          <xs:all>
            <xs:element name="Description" minOccurs="0">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:minLength value="0"/>
                  <xs:maxLength value="255"/>
                </xs:restriction>
              </xs:simpleType>
            </xs:element>
          </xs:all>
          <xs:attribute name="IPAddress" use="required">
            <xs:annotation>
              <xs:documentation>Another explanation!  WOW!</xs:documentation>
            </xs:annotation>
            <xs:simpleType>
              <xs:restriction base="xs:string">
                <xs:pattern value="(([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])"/>
              </xs:restriction>
            </xs:simpleType>
          </xs:attribute>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    A minimal example xml document would be:
    <?xml version="1.0" encoding="utf-8" ?>
    <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
    </SomeDataElement>
    With the max example being:
    <?xml version="1.0" encoding="utf-8" ?>
    <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
      <Description>ddd</Description>
    </SomeDataElement>
    This schema simply CANNOT be exposed by WCF. Let's list why:
    - svcutil.exe will not generate classes for you because it can't read an xsd with xs:annotation.
    - Even if you remove the documentation, the DataContractSerializer DOES NOT support attributes, so IPAddress would become an element, thus not meeting the contract xsd.
    - xsd.exe could generate classes, but it is a very legacy tool, generates legacy code, and you still suffer from the following issues:
    - NONE of the serializers support emitting of the xs:annotation documentation. You'd think a consumer would really like to have as much documentation as possible!
    - NONE of the serializers support the enforcement of xs:restriction, so you can forget about xs:minLength, xs:maxLength, or xs:pattern enforcement.
    Microsoft... please, please, please, please look at putting the work into your serializers so that they support the very basics of designing enterprise data contracts!!
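    For comparison, a hand-written class that actually honors this contract has to fall back on the older XmlSerializer attribute model, since that serializer at least understands XML attributes. A minimal sketch follows (type and property names inferred from the xsd above; the length and pattern restrictions still have to be checked by hand, because no serializer enforces them):
    using System.Xml.Serialization;

    [XmlRoot("SomeDataElement", Namespace = "http://tempuri.org/Sample.xsd")]
    public class SomeDataElement
    {
        // Serialized as an XML attribute - something the DataContractSerializer cannot express.
        [XmlAttribute("IPAddress")]
        public string IPAddress { get; set; }

        // Optional element (minOccurs="0"); the xsd maxLength/pattern rules are not enforced by the
        // serializer and would need to be validated manually before the message is sent.
        [XmlElement("Description")]
        public string Description { get; set; }
    }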

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works, though. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I do not even know how to take this example SP and change it to fit my tables since, like I said, I don't know what is going on. I also don't know what would happen if the object has more tables in it. Like say I have a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds. The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes linq to sql nicer as you've got objects, especially if you're first parsing data from an xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations. Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging. Some of the data flows, however, are daily batch type transmissions. For the daily batch transmissions, it was decided that we'd use SSIS to pull the data direct from DB2 to either a SQL Server or flat file. I'm not at all an SSIS guy, I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA, mostly informal stuff like that. And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy. Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?). One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file. It sounded like a pretty straightforward task. As always, the devil is in the details. Configuring the DB2 connection manager took a bit of trial and error. As such, I thought I'd post my experiences here in hopes that they might save someone the efforts I went through. That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on. Before you get started, you need to figure out how you're going to connect to DB2. From the research I did, it looks like there are a few options. IBM has both an OLE DB and .Net data provider which can be found here. I installed their client access tools and tried to use both the .Net and OLE DB providers but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect. I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out. The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2. This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here. As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it. I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM. However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack. Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter in your host name, login credentials and initial catalog. A couple of things to note here. First, the Initial catalog needs to be the same as your host name. Not sure why that is, but trust me, it just does. Second, for credentials, in my environment, we're using what the client's iSeries people refer to as "profiles". I guess this is similar to SQL auth in the SQL Server world. In other words, they've given me a username and password for connecting to DB, so I've entered it here. Next, click the Data Links button.  
On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out. From the little bit I've read, packages are used to control SQL compilation and each DB2 connection needs one. The package collection, I believe, controls where your package is created. One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority. Next click the ellipsis next to the Network drop-down. Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here. Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform. Process binary as character was necessary to handle some of the DB2 data types. I had a few columns that showed all their data as "System.Byte[]". Checking Process binary as character resolved this. At this point, you should be good to go. You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration. The Test Connection button is obvious; it just verifies you can connect to the host using the configuration data you've entered. The Packages button will attempt to connect to the host and create the packages required to execute queries. This isn't meant to be a comprehensive look at SSIS and DB2, these are just some of the notes I've come up with since I've started working with DB2 and SSIS. I'm sure as I continue developing my packages, I'll find more quirks and will post them here.
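    For reference, all of the settings above end up as properties on an ordinary OLE DB connection string for the provider. A rough sketch of what that string looks like is below; the property names are from memory and the values are placeholders, so verify them against the provider documentation before relying on them:
    Provider=DB2OLEDB;User ID=MYPROFILE;Password=*****;Initial Catalog=MYHOST;Network Transport Library=TCPIP;Network Address=MYHOST;Package Collection=QGPL;DBMS Platform=DB2/AS400;Process Binary as Character=True;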

    Read the article

  • A DBA's Day - Utopia, or Not?

    - by lsarecz
    This morning, at the HOUG BI/DW and DB professional day, I gave a presentation about how a DBA should be working these days. The main point was that, in order to serve the business best, it makes sense to approach the question from the side of the users and the application. That is, monitor user satisfaction, and if something is off, drill down into the lower layers (MW, DB, OS, HW, Storage, Network) and continue the diagnostics there. To my surprise, after the talk the head of the DB section commented that this is still utopian. Then this afternoon an operations director who had been sitting in the audience noted in an e-mail that he does not agree with that either, because they already operate this way, even if they do not use Grid Control for every element of it. And nothing proves better that this operating model is not a utopia than the fact that Oracle Enterprise Manager 11g provides complete support for all of it. If you are interested in my presentation after all this, you can download it from here: part 1, part 2.

    Read the article

  • Entity Framework Batch Update and Future Queries

    - by pwelter34
    Entity Framework Extended Library A library that extends the functionality of Entity Framework. Features: Batch Update and Delete, Future Queries, Audit Log. Project Package and Source NuGet Package: PM> Install-Package EntityFramework.Extended NuGet: http://nuget.org/List/Packages/EntityFramework.Extended Source: http://github.com/loresoft/EntityFramework.Extended Batch Update and Delete A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. In most scenarios this is just fine. There are however some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it. Deleting //delete all users where FirstName matches context.Users.Delete(u => u.FirstName == "firstname"); Update //update all tasks with status of 1 to status of 2 context.Tasks.Update( t => t.StatusId == 1, t => new Task {StatusId = 2}); //example of using an IQueryable as the filter for the update var users = context.Users .Where(u => u.FirstName == "firstname"); context.Users.Update( users, u => new User {FirstName = "newfirstname"}); Future Queries Build up a list of queries for the data that you need, and the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is great. Using this feature is as simple as appending .Future() to the end of your queries. To use the Future Queries, make sure to import the EntityFramework.Extensions namespace. Future queries are created with the following extension methods... Future() FutureFirstOrDefault() FutureCount() Sample // build up queries var q1 = db.Users .Where(t => t.EmailAddress == "[email protected]") .Future(); var q2 = db.Tasks .Where(t => t.Summary == "Test") .Future(); // this triggers the loading of all the future queries var users = q1.ToList(); In the example above, there are 2 queries built up; as soon as one of the queries is enumerated, it triggers the batch load of both queries. // base query var q = db.Tasks.Where(t => t.Priority == 2); // get total count var q1 = q.FutureCount(); // get page var q2 = q.Skip(pageIndex).Take(pageSize).Future(); // triggers execute as a batch int total = q1.Value; var tasks = q2.ToList(); In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch together the queries to get all the data in one database call. Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query. Audit Log The Audit Log feature will capture the changes to entities anytime they are submitted to the database. The Audit Log captures only the entities that are changed and only the properties on those entities that were changed. The before and after values are recorded. 
AuditLogger.LastAudit is where this information is held and there is a ToXml() method that makes it easy to turn the AuditLog into xml for easy storage. The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API. Fluent Configuration // config audit when your application is starting up... var auditConfiguration = AuditConfiguration.Default; auditConfiguration.IncludeRelationships = true; auditConfiguration.LoadRelationships = true; auditConfiguration.DefaultAuditable = true; // customize the audit for Task entity auditConfiguration.IsAuditable<Task>() .NotAudited(t => t.TaskExtended) .FormatWith(t => t.Status, v => FormatStatus(v)); // set the display member when status is a foreign key auditConfiguration.IsAuditable<Status>() .DisplayMember(t => t.Name); Create an Audit Log var db = new TrackerContext(); var audit = db.BeginAudit(); // make some updates ... db.SaveChanges(); var log = audit.LastLog;

    Read the article

  • state: pending & public-address: null :: Juju setup on local machine

    - by Danny Kopping
    I've set up Juju on a local VM inside VMWare running on Mac OSX. Everything seems to be working fine, except when I deployed MySQL & WordPress from the examples, I get the following when I run juju status:
    danny@ubuntu:~$ juju status
    machines:
      0:
        dns-name: localhost
        instance-id: local
        instance-state: running
        state: down
    services:
      mysql:
        charm: local:oneiric/mysql-11
        relations:
          db: wordpress
        units:
          mysql/0:
            machine: 0
            public-address: 192.168.122.107
            relations:
              db:
                state: up
            state: started
      wordpress:
        charm: local:oneiric/wordpress-31
        exposed: true
        relations:
          db: mysql
        units:
          wordpress/0:
            machine: 0
            open-ports: []
            public-address: null
            relations: {}
            state: pending
    Note the state: pending and public-address: null for the wordpress unit. Can't find any documentation relating to this issue. Any help very much appreciated - wonderful idea for a project!

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >