Search Results

Search found 23131 results on 926 pages for 'ms query'.


  • NHibernate LINQ query throws error "Could not resolve property"

    - by Xorandor
    I'm testing out using LINQ with NHibernate but have run into some problems resolving String.Length. I have the following: public class DC_Control { public virtual int ID { get; private set; } public virtual string Name { get; set; } public virtual bool IsEnabled { get; set; } public virtual string Url { get; set; } public virtual string Category { get; set; } public virtual string Description { get; set; } public virtual bool RequireScriptManager { get; set; } public virtual string TriggerQueryString { get; set; } public virtual DateTime? DateAdded { get; set; } public virtual DateTime? DateUpdated { get; set; } } public class DC_ControlMap : ClassMap<DC_Control> { public DC_ControlMap() { Id(x => x.ID); Map(x => x.Name).Length(128); Map(x => x.IsEnabled); Map(x => x.Url); Map(x => x.Category); Map(x => x.Description); Map(x => x.RequireScriptManager); Map(x => x.TriggerQueryString); Map(x => x.DateAdded); Map(x => x.DateUpdated); } } private static ISessionFactory CreateSessionFactory() { return Fluently.Configure() .Database(FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2008) .Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())) .ExposeConfiguration(c => c.SetProperty("connection.connection_string", "CONNSTRING")) .ExposeConfiguration(c => c.SetProperty("proxyfactory.factory_class", "NHibernate.ByteCode.Castle.ProxyFactoryFactory,NHibernate.ByteCode.Castle")) .BuildSessionFactory(); } public static void test() { using (ISession session = sessionFactory.OpenSession()) { var sqlQuery = session.CreateSQLQuery("select * from DC_Control where LEN(url) > 80").AddEntity(typeof(DC_Control)).List<DC_Control>(); var linqQuery = session.Linq<DC_Control>().Where(c => c.Url.Length > 80).ToList(); } } In my test method I first perform the query using SQL, and this works just fine. Then I try to do the same thing in LINQ, and it throws the following error: NHibernate.QueryException: could not resolve property: Url.Length of: DC_Control I've searched a lot for this "could not resolve property" error, but I can't quite figure out what it means. Is this because the LINQ implementation is not complete? If so, it's a bit disappointing coming from Linq2Sql, where this would just work. I also tried setting up the mapping with an hbm.xml file instead of using FluentNHibernate, but it produced the same error.
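
    A possible fallback, sketched here on the assumption that the old Linq-to-NHibernate provider simply cannot translate Url.Length: the same filter can be expressed in HQL, whose length() function the query translator does understand. Names follow the classes above; this is a sketch, not the original poster's code.

      // HQL fallback sketch (assumes the DC_Control mapping shown above)
      var longUrls = session
          .CreateQuery("from DC_Control c where length(c.Url) > :len")
          .SetInt32("len", 80)
          .List<DC_Control>();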

    Read the article

  • MOSS Content Query Web part itemstyle.xsl

    - by nav
    I have a Content Query Web Part (CQWP) pulling the URL and title from a NewsLinks list. The CQWP uses the XSLT style LVIS.News.Links defined in ItemStyle.xsl. I need to sort by the title field (@Title0), but the <xsl:sort> line (commented out below) causes an error. Does anyone know what's causing this error? Many thanks. The XSLT code is below: <xsl:template name="LVIS.News.Links" match="Row[@Style='LVIS.News.Links']" mode="itemstyle"> <xsl:param name="CurPos" /> <xsl:param name="Last" /> <xsl:variable name="SafeLinkUrl"> <xsl:call-template name="OuterTemplate.GetSafeLink"> <xsl:with-param name="UrlColumnName" select="'LinkUrl'"/> </xsl:call-template> </xsl:variable> <xsl:variable name="DisplayTitle"> <xsl:call-template name="OuterTemplate.GetTitle"> <xsl:with-param name="Title" select="@URL"/> <xsl:with-param name="UrlColumnName" select="'URL'"/> </xsl:call-template> </xsl:variable> <xsl:variable name="LinkTarget"> <xsl:if test="@OpenInNewWindow = 'True'" >_blank</xsl:if> </xsl:variable> <xsl:variable name="SafeImageUrl"> <xsl:call-template name="OuterTemplate.GetSafeStaticUrl"> <xsl:with-param name="UrlColumnName" select="'ImageUrl'"/> </xsl:call-template> </xsl:variable> <xsl:variable name="Header"> <xsl:if test="$CurPos = 1"> <![CDATA[<ul class="list_Links">]]> </xsl:if> </xsl:variable> <xsl:variable name="Footer"> <xsl:if test="$Last = $CurPos"> <![CDATA[</ul>]]> </xsl:if> </xsl:variable> <xsl:value-of select="$Header" disable-output-escaping="yes" /> <li> <a> <xsl:attribute name="href"> <xsl:value-of select="substring-before($DisplayTitle,', ')"></xsl:value-of> </xsl:attribute> <xsl:attribute name="title"> <xsl:value-of select="@Description"/> </xsl:attribute> <!-- <xsl:sort select="@Title0"/> --> <xsl:value-of select="@Title0"> </xsl:value-of> </a> </li> <xsl:value-of select="$Footer" disable-output-escaping="yes" /> </xsl:template>
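
    For context: in XSLT 1.0 an <xsl:sort> element is only allowed as a child of <xsl:for-each> or <xsl:apply-templates>, which is why un-commenting it inside the <a> element raises an error; in a CQWP the ordering is normally driven by the web part's query settings rather than by ItemStyle.xsl. A minimal sketch of a legal placement (illustrative only, not the CQWP fix):

      <!-- sketch: xsl:sort must sit directly under xsl:for-each or xsl:apply-templates -->
      <xsl:for-each select="Row">
        <xsl:sort select="@Title0" />
        <xsl:value-of select="@Title0" />
      </xsl:for-each>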

    Read the article

  • Neo4j increasing latency as SKIP increases on Cypher query + REST API

    - by voldomazta
    My setup: Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) Neo4j 2.0.0-M06 Enterprise First I made sure I warmed up the cache by executing the following: START n=node(*) RETURN COUNT(n); START r=relationship(*) RETURN count(r); The database holds 63,677 nodes and 7,169,995 relationships. Now I have the following query: START u1=node:node_auto_index('uid:39') MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user) WHERE u2.uid <> 39 WITH u2.uid AS uid, (CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS have RETURN uid, SUM(have) AS total ORDER BY total DESC SKIP 0 LIMIT 25 This UID has about 40k+ results that I want to paginate. The initial skip took around 773ms. I tried page 2 (skip 25) and the latency was about the same; even at page 500 it only rose to around 900ms, so I didn't worry about it. Then I tried some fast-forward paging and jumped by thousands: 1000, then 2000, then 3000. I was hoping the ORDER BY arrangement would already have been cached by Neo4j, and that SKIP would just move to that position in the result rather than iterating through everything again. But with each additional thousand skipped, the latency increased by a lot. It isn't just cache warming because, for one, I had already warmed up the cache and, two, I tried each skip a couple of times and got the same results: SKIP 0: 773ms SKIP 1000: 1369ms SKIP 2000: 2491ms SKIP 3000: 3899ms SKIP 4000: 5686ms SKIP 5000: 7424ms Now who the hell would want to view 5000 pages of results? 40k even?! :) Good point! I will probably put a cap on the maximum results a user can view, but I was just curious about this phenomenon. Will somebody please explain why Neo4j seems to be re-iterating over data it already appears to know?
Here is my profiling for the 0 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(0)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14 of type Any),false)"], limit="Add(Literal(0),Literal(25))", _rows=25, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) And for the 5000 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(5000)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872 of type Any),false)"], limit="Add(Literal(5000),Literal(25))", _rows=5025, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) The only difference is the LIMIT clause on the Top function. I hope we can make this work as intended, I really don't want to delve into doing an embedded Neo4j + my own Jetty REST API for the web app.
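
    One hedged idea, not from the post: since the only difference between the two plans is the Top operator having to keep skip + 25 rows, keyset-style paging (passing the last seen total/uid back as parameters instead of using SKIP) may keep the per-page cost flatter. A rough sketch against the same data model, where lastTotal and lastUid are hypothetical parameters holding the last row of the previous page:

      START u1=node:node_auto_index('uid:39')
      MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user)
      WHERE u2.uid <> 39
      WITH u2.uid AS uid, SUM(CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS total
      WHERE total < {lastTotal} OR (total = {lastTotal} AND uid > {lastUid})
      RETURN uid, total
      ORDER BY total DESC
      LIMIT 25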

    Read the article

  • MySQL query - if not exists - insert into - else - update

    - by user3180931
    I made a simple document generator driven by a form; the form saves everything to a MySQL database. It works great, but when someone types the same 'nrumowy' it creates a new row in MySQL. 'nrumowy' is unique, so when someone submits the form with an existing 'nrumowy' I want to just update the existing data in MySQL. I have this code: $con=mysqli_connect("localhost","login","pass","database"); // Check connection if (mysqli_connect_errno()) { echo "Failed to connect to MySQL: " . mysqli_connect_error(); } // escape variables for security $numerklienta = mysqli_real_escape_string($con, $_POST['numerklienta']); $name = mysqli_real_escape_string($con, $_POST['name']); $hours = mysqli_real_escape_string($con, $_POST['hours']); $date = mysqli_real_escape_string($con, $_POST['date']); $beginDate = mysqli_real_escape_string($con, $_POST['beginDate']); $nrdomu = mysqli_real_escape_string($con, $_POST['nrdomu']); $telefon = mysqli_real_escape_string($con, $_POST['telefon']); $fax = mysqli_real_escape_string($con, $_POST['fax']); $nip = mysqli_real_escape_string($con, $_POST['nip']); $email = mysqli_real_escape_string($con, $_POST['email']); $stronawww = mysqli_real_escape_string($con, $_POST['stronawww']); $branza = mysqli_real_escape_string($con, $_POST['branza']); $vatkodpocztowy = mysqli_real_escape_string($con, $_POST['vatkodpocztowy']); $vatmiejscowosc = mysqli_real_escape_string($con, $_POST['vatmiejscowosc']); $vatulica = mysqli_real_escape_string($con, $_POST['vatulica']); $vatnrdomu = mysqli_real_escape_string($con, $_POST['vatnrdomu']); $vatemail = mysqli_real_escape_string($con, $_POST['vatemail']); $vatosoba = mysqli_real_escape_string($con, $_POST['vatosoba']); $datapublikacji = mysqli_real_escape_string($con, $_POST['datapublikacji']); $rabat = mysqli_real_escape_string($con, $_POST['rabat']); $wartoscnetto = mysqli_real_escape_string($con, $_POST['wartoscnetto']); $typreklamy = mysqli_real_escape_string($con, $_POST['typreklamy']); $inne = mysqli_real_escape_string($con, $_POST['inne']); $inne2 = mysqli_real_escape_string($con, $_POST['inne2']); $inne3 = mysqli_real_escape_string($con, $_POST['inne3']); $zaliczka = mysqli_real_escape_string($con, $_POST['zaliczka']); $liczbarat1 = mysqli_real_escape_string($con, $_POST['liczbarat1']); $zaakceptowaneprzez = mysqli_real_escape_string($con, $_POST['zaakceptowaneprzez']); $telzam = mysqli_real_escape_string($con, $_POST['telzam']); $datapodpis = mysqli_real_escape_string($con, $_POST['datapodpis']); $nrumowy = mysqli_real_escape_string($con, $_POST['nrumowy']); $sql="IF NOT EXISTS ( SELECT * FROM zam WHERE nrumowy = '$nrumowy' ) THEN INSERT INTO zam (numerklienta, name, hours, date, beginDate, nrdomu, telefon, fax, nip, email, stronawww, branza, vatkodpocztowy, vatmiejscowosc, vatulica, vatnrdomu, vatemail, vatosoba, datapublikacji, rabat, wartoscnetto, typreklamy, inne, inne2, inne3, zaliczka, liczbarat1, zaakceptowaneprzez, telzam, datapodpis, nrumowy) VALUES ('$numerklienta', '$name', '$hours', '$date', '$beginDate', '$nrdomu', '$telefon', '$fax', '$nip', '$email', '$stronawww', '$branza', '$vatkodpocztowy', '$vatmiejscowosc', '$vatulica', '$vatnrdomu', '$vatemail', '$vatosoba', '$datapublikacji', '$rabat', '$wartoscnetto', '$typreklamy', '$inne', '$inne2', '$inne3', '$zaliczka', '$liczbarat1', '$zaakceptowaneprzez', '$telzam', '$datapodpis', '$nrumowy' ) ELSE UPDATE zam SET name = '$name', numerklienta = '$numerklienta', hours = '$hours', date = '$date', beginDate = '$beginDate', nrdomu = '$nrdomu', telefon = '$telefon', fax = '$fax', nip = '$nip', email = '$email', stronawww = '$stronawww', branza = '$branza', vatkodpocztowy = '$vatkodpocztowy', vatmiejscowosc = '$vatmiejscowosc', vatulica = '$vatulica', vatnrdomu = '$vatnrdomu', vatemail = '$vatemail', vatosoba = '$vatosoba', datapublikacji = '$datapublikacji', rabat = '$rabat', wartoscnetto = '$wartoscnetto', typreklamy = '$typreklamy', inne = '$inne', inne2 = '$inne2', inne3 = '$inne3', zaliczka = '$zaliczka', liczbarat1 = '$liczbarat1', zaakceptowaneprzez = '$zaakceptowaneprzez', telzam = '$telzam', datapodpis = '$datapodpis' WHERE nrumowy ='$nrumowy' END IF"; if (!mysqli_query($con,$sql)) { die('Error: ' . mysqli_error($con)); } mysqli_close($con); This query works great without the "SELECT ..." and "ELSE UPDATE" parts, i.e. as a plain 'INSERT INTO', and it also works when I change the 'INSERT INTO' to an 'UPDATE'; I just don't know how to express the "if not exists - insert into - else - update" logic.
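
    A sketch of one common alternative (not from the original post): IF ... THEN ... ELSE ... END IF only works inside stored programs, but because nrumowy is UNIQUE the same insert-or-update can be written as a single INSERT ... ON DUPLICATE KEY UPDATE statement. Only a few columns are shown; the remaining ones follow the same pattern:

      $sql = "INSERT INTO zam (nrumowy, name, numerklienta, hours)
              VALUES ('$nrumowy', '$name', '$numerklienta', '$hours')
              ON DUPLICATE KEY UPDATE
                  name = VALUES(name),
                  numerklienta = VALUES(numerklienta),
                  hours = VALUES(hours)";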

    Read the article

  • MySQL query works in PHPMyAdmin but not PHP

    - by Su4p
    I do not understand what's happening. I have a query in PHP who crashes -with a strange error-. When I copy/paste the exact same request in PHPMyAdmin it works as expected. What am I doing wrong here ? SELECT oms_patient.id, oms_patient.date, oms_patient.date_modif, date_modif, AES_DECRYPT(nom,"xxxxx") AS "Nom", AES_DECRYPT(prenom,"xxxxx") AS "Prénom usuel", DATE_FORMAT(ddn, "%d/%m/%Y") AS "Date de naissance", villeNaissance AS "Lieu de naissance (ville)", CONCAT(oms_departement.libelle,"(",id_departement,")") AS "Lieu de vie", CONCAT(oms_pays.libelle,"(",id_pays,")") AS "Pays", CONCAT(patientsexe.libelle,"(",id_sexe,")") AS "Sexe", CONCAT(patientprofession.libelle,"(",id_profession,")") AS "Profession", IF(asthme>0,"Oui","Non") AS "Asthme", IF(rhinite>0,"Oui","Non") AS "Rhinite", IF(bcpo>0,"Oui","Non") AS "BPCO", IF(insuffisanceResp>0,"Oui","Non") AS "Insuffisance respiratoire chronique", IF(chirurgieOrl>0,"Oui","Non") AS "Chirurgie ORL du ronflement", IF(autreChirurgie>0,"Oui","Non") AS "Autre chirurgie ORL", IF(allergies>0,"Oui","Non") AS "Allergies", IF(OLD>0,"Oui","Non") AS "OLD", IF(hypertensionArterielle>0,"Oui","Non") AS "Hypertension artérielle", IF(infarctusMyocarde>0,"Oui","Non") AS "Infarctus du myocarde", IF(insuffisanceCoronaire>0,"Oui","Non") AS "Insuffisance coronaire", IF(troubleRythme>0,"Oui","Non") AS "Trouble du rythme", IF(accidentVasculaireCerebral>0,"Oui","Non") AS "Accident vasculaire cérébral", IF(insuffisanceCardiaque>0,"Oui","Non") AS "Insuffisance cardiaque", IF(arteriopathie>0,"Oui","Non") AS "Artériopathie", IF(tabagismeActuel>0,"Oui","Non") AS "Tabagisme actuel", CONCAT(nbPaquetsActuel," ","PA") AS "", IF(tabagismeAncien>0,"Oui","Non") AS "Tabagisme ancien", CONCAT(nbPaquetsAncien," ","PA") AS "", IF(alcool>0,"Oui","Non") AS "Alcool (conso régulière)", IF(refluxGastro>0,"Oui","Non") AS "Reflux gastro-oesophagien", IF(glaucome>0,"Oui","Non") AS "Glaucome", IF(diabete>0,"Oui","Non") AS "Diabète", CONCAT(patienttypeDiabete.libelle,"(",id_typeDiabete,")") AS "", IF(hypercholesterolemie>0,"Oui","Non") AS "Hypercholestérolémie", IF(hypertriglyceridemie>0,"Oui","Non") AS "Hypertriglycéridémie", IF(dysthyroidie>0,"Oui","Non") AS "Dysthyroïdie", IF(depression>0,"Oui","Non") AS "Dépression", IF(sedentarite>0,"Oui","Non") AS "Sédentarité", IF(syndromeDApneesSommeil>0,"Oui","Non") AS "SAS", IF(obesite>0,"Oui","Non") AS "Obésité", IF(dysmorphieFaciale>0,"Oui","Non") AS "Dysmorphie faciale", TextObservations AS "", id_user FROM oms_patient LEFT JOIN oms_departement ON oms_departement.id = id_departement LEFT JOIN oms_pays ON oms_pays.id = id_pays LEFT JOIN patientsexe ON patientsexe.id = id_sexe LEFT JOIN patientprofession ON patientprofession.id = id_profession LEFT JOIN patienttypeDiabete ON patienttypeDiabete.id = id_typeDiabete WHERE oms_patient.id=1 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'small"(conso régulière)", IF(refluxGastro0,"Oui","Non") as "Reflux ga' at line 1 "near 'small" <-- where is small o_O The PHP code isn't really relevant cause you won't see a lot. $db = mysql_connect(); mysql_select_db();//TODO SWITCH TO PDO mysql_query("SET NAMES UTF8"); $fields = $form->getFields($form); $settingsForm = $form->getSettings(); $sql = 'SELECT oms_patient.id,oms_patient.date,oms_patient.date_modif,'; foreach ($fields as $field) { if (!$field->isMultiSelect()) { $field->select_full(&$sql, 'oms_patient', null); } } if (isset($settingsForm['linkTo'])) { $idLinkTo = 'id_' . 
str_replace('oms_', '', $settingsForm['linkTo']); $sql .= $idLinkTo; } $sql.=' FROM oms_patient'; foreach ($fields as $field) { if (!$field->isMultiSelect() && $field->getTable('oms_patient')) { $sql .=' LEFT JOIN ' . $field->getTable('oms_patient') . ' ON ' . $field->getTable('oms_patient') . '.id = '.$field->getFieldName().' '; } } $sql.=' where oms_patient.id=' . $this->m_settings['e']; $result = mysql_query($sql) or die('Erreur SQL !<br>' . $sql . '<br>' . mysql_error()); $data = mysql_fetch_assoc($result); var_dump of $sql string(2663) "SELECT oms_patient.id,oms_patient.date,oms_patient.date_modif,date_modif,AES_DECRYPT(nom,"xxxxx") as "Nom",AES_DECRYPT("prenom","xxxxx") as "Prénom usuel",DATE_FORMAT(ddn, "%d/%m/%Y") as "Date de naissance",villeNaissance as "Lieu de naissance (ville)",CONCAT(oms_departement.libelle,"(",id_departement,")") as "Lieu de vie",CONCAT(oms_pays.libelle,"(",id_pays,")") as "Pays",CONCAT(patientsexe.libelle,"(",id_sexe,")") as "Sexe",CONCAT(patientprofession.libelle,"(",id_profession,")") as "Profession", IF"... can't go further to see what is in the output after the "..." <-- if you have an idea
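
    One hedged debugging step, not from the post: var_dump() output in a browser is often truncated, so before comparing against the phpMyAdmin copy it can help to write the full generated statement somewhere it cannot be cut off and diff the two:

      error_log($sql);                                      // full query ends up in the PHP error log
      file_put_contents('/tmp/generated_query.sql', $sql);  // or dump it to a file (path is arbitrary)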

    Read the article

  • Errors with parameter datatype in PostgreSql query

    - by John
    I'm trying to execute a query against PostgreSQL using the following code. It's written in C/C++ and I keep getting the following error when declaring a cursor: DECLARE CURSOR failed: ERROR: could not determine data type of parameter $1 Searching here and on Google, I can't find a solution. Can anyone see where I have made an error and why this is happening? Thanks! void searchdb( PGconn *conn, char* name, char* offset ) { // Will hold the number of field in table int nFields; // Start a transaction block PGresult *res = PQexec(conn, "BEGIN"); if (PQresultStatus(res) != PGRES_COMMAND_OK) { printf("BEGIN command failed: %s", PQerrorMessage(conn)); PQclear(res); exit_nicely(conn); } // Clear result PQclear(res); printf("BEGIN command - OK\n"); //set the values to use const char *values[3] = {(char*)name, (char*)RESULTS_LIMIT, (char*)offset}; //calculate the lengths of each of the values int lengths[3] = {strlen((char*)name), sizeof(RESULTS_LIMIT), sizeof(offset)}; //state which parameters are binary int binary[3] = {0, 0, 1}; res = PQexecParams(conn, "DECLARE emprec CURSOR for SELECT name, id, 'Events' as source FROM events_basic WHERE name LIKE '$1::varchar%' UNION ALL " " SELECT name, fsq_id, 'Venues' as source FROM venues_cache WHERE name LIKE '$1::varchar%' UNION ALL " " SELECT name, geo_id, 'Cities' as source FROM static_cities WHERE name LIKE '$1::varchar%' OR FIND_IN_SET('$1::varchar%', alternate_names) != 0 LIMIT $2::int4 OFFSET $3::int4", 3, //number of parameters NULL, //ignore the Oid field values, //values to substitute $1 and $2 lengths, //the lengths, in bytes, of each of the parameter values binary, //whether the values are binary or not 0); //we want the result in text format // Fetch rows from table if (PQresultStatus(res) != PGRES_COMMAND_OK) { printf("DECLARE CURSOR failed: %s", PQerrorMessage(conn)); PQclear(res); exit_nicely(conn); } // Clear result PQclear(res); res = PQexec(conn, "FETCH ALL in emprec"); if (PQresultStatus(res) != PGRES_TUPLES_OK) { printf("FETCH ALL failed"); PQclear(res); exit_nicely(conn); } // Get the field name nFields = PQnfields(res); // Prepare the header with table field name printf("\nFetch record:"); printf("\n********************************************************************\n"); for (int i = 0; i < nFields; i++) printf("%-30s", PQfname(res, i)); printf("\n********************************************************************\n"); // Next, print out the record for each row for (int i = 0; i < PQntuples(res); i++) { for (int j = 0; j < nFields; j++) printf("%-30s", PQgetvalue(res, i, j)); printf("\n"); } PQclear(res); // Close the emprec res = PQexec(conn, "CLOSE emprec"); PQclear(res); // End the transaction res = PQexec(conn, "END"); // Clear result PQclear(res); }
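
    A hedged observation with a sketch (not part of the original code): the $1 placeholder only ever appears inside single quotes, so the server sees it as literal text and never as a referenced parameter, which is one way to end up with "could not determine data type of parameter $1". Keeping the placeholders outside the quotes and appending the wildcard with || is the usual pattern, and the limit/offset values are simplest to pass as ordinary text strings:

      /* sketch: the first of the three SELECTs, rewritten with real placeholders */
      const char *stmt =
          "DECLARE emprec CURSOR FOR "
          "SELECT name, id, 'Events' AS source FROM events_basic "
          "WHERE name LIKE $1 || '%' "
          "LIMIT $2::int4 OFFSET $3::int4";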

    Read the article

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to learn more of the detailed logic behind how the Informix IDS and SE optimizers decide the best route for processing a query, beyond what SET EXPLAIN shows? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks first. What are the rest of them? If I'm not mistaken, Informix's ROWID is a SERIAL (INT), which allows for a maximum of roughly 2 billion rows, or maybe it uses INT8 for much larger row counts? However, I think Oracle uses hex values for ROWID. Too bad ROWID can't often be used, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query-progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, and it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet the query's criteria, progress displayed every 100, 200, 500 or 1,000 rows, the ability for users to cancel at any time, and qualifying rows displayed as they are added to the result list while the search continues. This is just one example; perhaps we could think of other neat/useful features, since the ingredients are more or less there. Perhaps we could fine-tune each query with more granularity than is currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence there. So being able to precisely control a query, rather than merely "hint-influence" it, is what's needed, and for that it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations, and maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc.

    Read the article

  • Is there a more expressive way of executing SQL query using Qt?

    - by ShaChris23
    I currently have this code: // Construct query QString const statement = QString("drop database if exists %1") .arg(databaseName_); QSqlQuery query(db); query.exec(statement); Is there a better way to write this than the above? Specifically, I don't like using QString to build the SQL statement. It'd be nice if Qt had some class so that I could do something like: // Construct query QSomeClass statement = "drop database if exists %1"; statement.setArg(1, databaseName_); // Replace all %1 in the original string. QSqlQuery query(db); query.exec(statement);
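
    For what it's worth, a sketch rather than a definitive answer (the table and variable names below are made up): for value parameters Qt already provides placeholders through QSqlQuery::prepare() and bindValue(); identifiers such as a database name cannot be bound, though, so QString::arg() remains the usual approach for statements like DROP DATABASE.

      // placeholder binding works for values, not for identifiers
      QSqlQuery query(db);
      query.prepare("DELETE FROM sessions WHERE created < :cutoff");
      query.bindValue(":cutoff", cutoffDate);
      query.exec();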

    Read the article

  • How does the HQL engine fire an HQL query at the back end?

    - by Maddy.Shik
    I want to understand how Hibernate executes an HQL query internally, or in other words how the HQL query engine works. Please suggest some good links on this. One reason for asking is the following problem. Class Branch { //lazy loaded @joincolumn(name="company_id") Company company; } Since Company is a heavy object, it is lazy loaded. Now I have the HQL query "from Branch as branch where branch.Company.id=:companyId". My concern is that if, in order to fire the above query, the HQL engine has to retrieve the Company object, then it's a performance hit, and I would prefer to add one more property to the Branch class, i.e. companyId. In that case the HQL query would be "from Branch as branch where branch.companyId=:companyId". If the HQL engine first generates SQL from the HQL and then fires the SQL query itself, there should be no performance issue. Please let me know if the problem is unclear.
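
    A relevant detail, hedged because it depends on the mapping: when an HQL query touches only the identifier of a lazy many-to-one (branch.company.id), Hibernate can normally resolve it from the foreign-key column on the Branch table itself, so the generated SQL does not have to fetch the Company row at all. A minimal sketch using the classes above:

      // filtering on the association's id should not initialise the lazy Company
      List branches = session
          .createQuery("from Branch as branch where branch.company.id = :companyId")
          .setParameter("companyId", companyId)
          .list();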

    Read the article

  • Is there a limit on the length of a query in MySQL?

    - by Bakhtiyor
    I am asking this question because I need to know this limit: I am generating a SELECT query in my PHP script, and the WHERE part of the query is built inside a loop. Precisely, it looks like this: $query="SELECT field_names FROM table_name WHERE "; $condition="metadata like \"%$uol_metadata_arr[0]%\" "; for($i=1; $i<count($uol_metadata_arr); $i++){ $condition.=" OR metadata like \"%$uol_metadata_arr[$i]%\" "; } $query.=$condition; $result=mysql_query($query); So that's why I need to know how long my $query string can be, because the array $uol_metadata_arr could contain many items.
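
    For reference, a hedged pointer rather than a definitive answer: the practical ceiling on statement length in MySQL is the max_allowed_packet setting (it applies on both the server and the client side), which can be inspected and, if the generated WHERE clause grows very large, raised:

      SHOW VARIABLES LIKE 'max_allowed_packet';
      SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- 64 MB; needs the SUPER privilege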

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
    This post will focus on some advanced programming topics concerned with IaaS (Infrastructure as a Service) which provided as windows azure virtual machine (with its related resources like virtual disk and virtual network), you know that windows azure started as PaaS cloud platform but regarding to some business cases which need to have full control over their virtual machine, so windows azure directed toward providing IaaS. Sometimes you will need to manage your cloud IaaS through code may be for these reasons: Working on hyper-cloud system by providing bursting connector to windows azure virtual machines Providing multi-tenant system which consume windows azure virtual machine Automated process on your on-premises or cloud service which need to utilize some virtual resources We are going to implement the following basic operation using C# code: List images Create virtual machine List virtual machines Restart virtual machine Delete virtual machine Before going to implement the above operations we need to prepare client side and windows azure subscription to communicate correctly by providing management certificate (x.509 v3 certificates) which permit client access to resources in your Windows Azure subscription, whilst requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure More info about setting management certificate located here. And to install .cer on other client machine you will need the .pfx file, or if not exist by exporting .cer as .pfx Note: You will need to install .net 4.5 on your machine to try the code So let start This post built on the post sent by Michael Washam "Advanced Windows Azure IaaS – Demo Code", so I'm here to declare some points and to add new operation which is not exist in Michael's demo The basic C# class object used here as client to azure REST API for IaaS service is HttpClient (Provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI) this object must be initialized with the required data like certificate, headers and content if required. 
Also I'd like to refer here that the code is based on using Asynchronous programming with calls to azure which enhance the performance and gives us the ability to work with complex calls which depends on more than one sub-call to achieve some operation The following code explain how to get certificate and initializing HttpClient object with required data like headers and content HttpClient GetHttpClient() { X509Store certificateStore = null; X509Certificate2 certificate = null; try { certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser); certificateStore.Open(OpenFlags.ReadOnly); string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"]; var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false); if (certificates.Count > 0) { certificate = certificates[0]; } } finally { if (certificateStore != null) certificateStore.Close(); }   WebRequestHandler handler = new WebRequestHandler(); if (certificate!= null) { handler.ClientCertificates.Add(certificate); HttpClient httpClient = new HttpClient(handler); //And to set required headers lik x-ms-version httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01"); httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml")); return httpClient; } return null; }  Let us keep the object httpClient as reference object used to call windows azure REST API IaaS service. For each request operation we need to define: Request URI HTTP Method Headers Content body (1) List images The List OS Images operation retrieves a list of the OS images from the image repository Request URI https://management.core.windows.net/<subscription-id>/services/images] Replace <subscription-id> with your windows Id HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None.  C# Code List<String> imageList = new List<String>(); //replace _subscriptionid with your WA subscription String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);  HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);  if (responseStream != null) {      XDocument xml = XDocument.Load(responseStream);      var images = xml.Root.Descendants(ns + "OSImage").Where(i => i.Element(ns + "OS").Value == "Windows");      foreach (var image in images)      {      string img = image.Element(ns + "Name").Value;      imageList.Add(img);      } } More information about the REST call (Request/Response) located here on this link http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx (2) Create Virtual Machine Creating virtual machine required service and deployment to be created first, so creating VM should be done through three steps incase hosted service and deployment is not created yet Create hosted service, a container for service deployments in Windows Azure. A subscription may have zero or more hosted services Create deployment, a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running. Create virtual machine, the previous two steps info required here in this step I suggest here to use the same name for service, deployment and service to make it easy to manage virtual machines Note: A name for the hosted service that is unique within Windows Azure. 
This name is the DNS prefix name and can be used to access the hosted service. For example: http://ServiceName.cloudapp.net// 2.1 Create service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx C# code The following method show how to create hosted service async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid) { String requestID = String.Empty;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid); HttpClient http = GetHttpClient();   System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding(); byte[] svcNameBytes = ae.GetBytes(ServiceName);   String locationEl = String.Empty; String locationVal = String.Empty;   if (String.IsNullOrEmpty(Location) == false) { locationEl = "Location"; locationVal = Location; } else { locationEl = "AffinityGroup"; locationVal = AffinityGroup; }   XElement srcTree = new XElement("CreateHostedService", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("ServiceName", ServiceName), new XElement("Label", Convert.ToBase64String(svcNameBytes)), new XElement(locationEl, locationVal) ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } 2.2 Create Deployment Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name> <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package <service-name> provided as input from the previous step HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx C# code The following method show how to create hosted service deployment async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML) { String requestID = String.Empty;     String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName); HttpClient http = GetHttpClient(); XElement srcTree = new XElement("Deployment", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("Name", ServiceName), new XElement("DeploymentSlot", "Production"), new XElement("Label", ServiceName), new XElement("RoleList", null) );   if (String.IsNullOrEmpty(VNETName) == false) { srcTree.Add(new XElement("VirtualNetworkName", VNETName)); }   if(DNSXML != null) { srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML))); }   XDocument deploymentXML = new XDocument(srcTree); ApplyNamespace(srcTree, ns);   deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);     String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", ""); 
HttpContent content = new StringContent(fixedXML); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); }   return requestID; } 2.3 Create Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles <cloudservice-name> and <deployment-name> are provided as input from the previous steps Http Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) located here http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx C# code async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName);   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);   HttpClient http = GetHttpClient(); HttpContent content = new StringContent(VMXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml"); HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } (3) List Virtual Machines To list virtual machine hosted on windows azure subscription we have to loop over all hosted services to get its hosted virtual machines To do that we need to execute the following operations: listing hosted services listing hosted service Virtual machine 3.1 Listing Hosted Services Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. More info about this HTTP request located here on this link http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx C# Code async private Task<List<XDocument>> GetAzureServices(String subscriptionid) { String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices ", subscriptionid); List<XDocument> services = new List<XDocument>();   HttpClient http = GetHttpClient();   Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var svcs = xml.Root.Descendants(ns + "HostedService"); foreach (XElement r in svcs) { XDocument vm = new XDocument(r); services.Add(vm); } }   return services; }  3.2 Listing Hosted Service Virtual Machines Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name> HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. 
More info about this HTTP request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx C# Code async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid) { String deployment = await GetAzureDeploymentName(ServiceName); XDocument vmXML = new XDocument();   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);   HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri); if (responseStream != null) { vmXML = XDocument.Load(responseStream); }   return vmXML; }  So the final method which can be used to list all virtual machines is: async public Task<XDocument> GetAzureVMs() { List<XDocument> services = await GetAzureServices(); XDocument vms = new XDocument(); vms.Add(new XElement("VirtualMachines")); ApplyNamespace(vms.Root, ns); foreach (var svc in services) { string ServiceName = svc.Root.Element(ns + "ServiceName").Value;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");   try { HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var roles = xml.Root.Descendants(ns + "RoleInstance"); foreach (XElement r in roles) { XElement svcnameel = new XElement("ServiceName", ServiceName); ApplyNamespace(svcnameel, ns); r.Add(svcnameel); // not part of the roleinstance vms.Root.Add(r); } } } catch (HttpRequestException http) { // no vms with cloud service } } return vms; }  (4) Restart Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>/Operations HTTP Method POST (HTTP 1.1) Headers x-ms-version: 2012-03-01 Content-Type: application/xml Body <RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <OperationType>RestartRoleOperation</OperationType> </RestartRoleOperation>  More details about this http request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx  C# Code async public Task<String> RebootVM(String ServiceName, String RoleName) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName); String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);   HttpClient http = GetHttpClient();   XElement srcTree = new XElement("RestartRoleOperation", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("OperationType", "RestartRoleOperation") ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; }  (5) Delete Virtual Machine You can delete your hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service also, so you can easily manage your virtual machines from code 5.1 Delete Deployment Request URI https://management.core.windows.net/< 
subscription-id >/services/hostedservices/< service-name >/deployments/<Deployment-Name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteDeployment( string deploymentName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  5.2 Delete Hosted Service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteService(string serviceName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName); Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager)); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  And the following is the method which can used to delete both of deployment and service async public Task<string> DeleteVM(string vmName) { string responseString = string.Empty;   // as a convention here in this post, a unified name used for service, deployment and VM instance to make it easy to manage VMs HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await DeleteDeployment(vmName);   if (responseMessage != null) {   string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault(); OperationResult result = await PollGetOperationStatus(requestID, 5, 120); if (result.Status == OperationStatus.Succeeded) { responseString = result.Message; HttpResponseMessage sResponseMessage = await DeleteService(vmName); if (sResponseMessage != null) { OperationResult sResult = await PollGetOperationStatus(requestID, 5, 120); responseString += sResult.Message; } } else { responseString = result.Message; } } return responseString; }  Note: This article is subject to be updated Hisham  References Advanced Windows Azure IaaS – Demo Code Windows Azure Service Management REST API Reference Introduction to the Azure Platform Representational state transfer Asynchronous Programming with Async and Await (C# and Visual Basic) HttpClient Class

    Read the article

  • Is there a setting in Exchange Server 2007 to make these headers propagate and be received by a POP/IMAP client?

    - by Ruruboy
    When using the EWS Managed API to send email via Exchange Server 2007, I noticed that MAPI clients like MS Outlook display all custom headers. But when I use POP3/IMAP clients like MS Outlook Express, these custom headers do not appear in the opened message. Is there a setting in Exchange Server 2007 that we can set to make these custom headers propagate and be received by a POP/IMAP client? Also, why do the custom headers in the example below show up in lowercase in MAPI clients like MS Outlook? Surprisingly, if we use the SmtpClient class to send the email instead, the headers arrive exactly as sent, with their original case (e.g. "Header"). Example of headers received by a MAPI client like MS Outlook via Exchange Server 2007: Received: from EXMAILVS1.blabla.com ([192.168.191.136]) by cashtp02.blabla.com ([XXX.XXX.XX.XXX]) with mapi; Mon, 20 Dec 2010 12:17:05 -0800 Content-Type: application/ms-tnef; name="winmail.dat" Content-Transfer-Encoding: binary From: asfsdf <[email protected]> To: asdsdf <[email protected]> Date: Mon, 20 Dec 2010 12:17:04 -0800 Subject: Please send me this header Thread-Topic: Please send me this header Thread-Index: AQHLoILek7g5cFgHQU6lHHfiKkdUMg== Message-ID: <[email protected]> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-Exchange-Organization-SCL: -1 X-MS-TNEF-Correlator: <[email protected]> customheader1: hello ali customheader2: hello Jace MIME-Version: 1.0

    Read the article

  • Cannot access certain URL on my wireless

    - by dehmann
    Problem: On my wireless network at home, there is one URL that I just cannot access with my browser: http://research.microsoft.com/ I have no problems with the Internet connection otherwise. But on that address I just get The connection was reset The connection to the server was reset while the page was loading. from Firefox. I am using a DSL modem (Westell) and Linksys wireless router (using DHCP). When I use my neighbor's wireless connection I can access the microsoft site without a problem. Additional technical details: But with my connection, here is what I get from nslookup. It is weird: It first cannot find the address, but after I look up another address it can find it: $ nslookup research.microsoft.com ;; connection timed out; no servers could be reached $ nslookup google.com Non-authoritative answer: Name: google.com Address: 72.14.204.104 Name: google.com Address: 72.14.204.147 Name: google.com Address: 72.14.204.99 Name: google.com Address: 72.14.204.103 $ nslookup research.microsoft.com Non-authoritative answer: Name: research.microsoft.com Address: 131.107.65.14 But even after nslookup finds it Firefox still cannot access it. Here is what traceroute says: $ traceroute http://research.microsoft.com/ traceroute: Warning: http://research.microsoft.com/ has multiple addresses; using 8.15.7.117 traceroute to http://research.microsoft.com/ (8.15.7.117), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 4.515 ms 2.760 ms 3.072 ms 2 * * * Traceroute just to the IP: $ traceroute 131.107.65.14 traceroute to 131.107.65.14 (131.107.65.14), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 11.912 ms 2.684 ms 2.808 ms 2 * * * Comparison: Traceroute to google.com IP: $ traceroute 72.14.204.99 traceroute to 72.14.204.99 (72.14.204.99), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 6.428 ms 6.981 ms 117.099 ms 2 * * * Any comments / help?

    Read the article

  • VPN - local and remote networks IP collision

    - by Guido García
    I have created a VPN connection in Windows using the New Network Connection wizard that comes with Windows. It works without problems in most places, but there is one particular place where, despite the connection to the remote public IP working fine, it is not able to validate the login/password and establish the VPN connection. In this place the local network is 10.0.0.x (the same range I use in other places where I am able to connect). The remote network is 192.168.x.x, so I suspect there is some kind of IP collision, because before connecting, a traceroute to e.g. 192.168.0.40 does not fail. 1 4 ms 1 ms 1 ms LINKSYS [10.0.0.1] 2 5 ms 1 ms 1 ms 172.26.27.1 3 4 ms 5 ms 3 ms 192.168.1.100 ... (more) I can't modify the local network beyond the first router (10.0.0.1). That is the only difference I've found so far. Any idea how to solve it? Thank you.

    Read the article

  • IPv6 works only after ping to routing box

    - by Ficik
    Situation: there is an IPv4-only router in the network and every computer is connected to it (Wi-Fi or cable). A server with both IPv4 and IPv6 is connected to this router as well. The server has a tunnelbroker 6to4 tunnel and radvd configured. Clients in the network have the right prefix and can ping each other. But they can't ping the Internet over IPv6 until they first ping the server (the one with the tunnel). I read somewhere that it's an ICMP problem, but I couldn't find a solution. Is the problem that the router is IPv4-only? The server and clients run Linux; the router runs DD-WRT without IPv6 support :( Ping attempt: standa@standa-laptop:~$ ping6 ipv6.google.com PING ipv6.google.com(2a00:1450:8007::69) 56 data bytes ^C --- ipv6.google.com ping statistics --- 29 packets transmitted, 0 received, 100% packet loss, time 28223ms standa@standa-laptop:~$ ping6 2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478 PING 2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478(2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478) 56 data bytes 64 bytes from 2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478: icmp_seq=1 ttl=64 time=3.55 ms 64 bytes from 2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478: icmp_seq=2 ttl=64 time=0.311 ms 64 bytes from 2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478: icmp_seq=3 ttl=64 time=0.269 ms 64 bytes from 2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478: icmp_seq=4 ttl=64 time=0.292 ms ^C --- 2001:470:XXXX:XXXX:21c:c0ff:fe2b:6478 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3000ms rtt min/avg/max/mdev = 0.269/1.107/3.559/1.415 ms standa@standa-laptop:~$ ping6 ipv6.google.com PING ipv6.google.com(2a00:1450:8007::69) 56 data bytes 64 bytes from 2a00:1450:8007::69: icmp_seq=1 ttl=57 time=20.7 ms 64 bytes from 2a00:1450:8007::69: icmp_seq=2 ttl=57 time=20.2 ms 64 bytes from 2a00:1450:8007::69: icmp_seq=3 ttl=57 time=23.4 ms ^C --- ipv6.google.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2001ms rtt min/avg/max/mdev = 20.267/21.479/23.413/1.392 ms

    Read the article

  • Problems setting up VLC server/client streaming

    - by Ayos
    I'm trying to set up a Linux machine as the server and a Windows XP machine as the client. Both machines are connected to the same local network via a Wi-Fi router. I setup the stream with the following properties : http stream port 8080 play locally And not much else. No firewall on the windows client(Windows firewall is disabled) When I try to open network stream via the client machine(Using VLC or Windows Media Player) I get the following errors: Media Player error code : 0xC00D11B3: Encountered a network Problem. VLC Console: main warning: connection timed out access_mms error: cannot connect to 192.168.1.3:8080 main debug: no access module matching "http" could be loaded main debug: TIMER module_need() : 12625.810 ms - Total 12625.810 ms / 1 intvls (Avg 12625.809 ms) main error: open of `http://192.168.1.3:8080' failed main debug: dead input main debug: repeating item main debug: starting playback of the new playlist item main debug: resyncing on http://192.168.1.3:8080 main debug: http://192.168.1.3:8080 is at 0 main debug: creating new input thread main debug: Creating an input for 'http://192.168.1.3:8080' main debug: using timeshift granularity of 50 MiB, in path 'C:\DOCUME~1\Accer\LOCALS~1\Temp' main debug: `http://192.168.1.3:8080' gives access `http' demux `' path `192.168.1.3:8080' main debug: creating demux: access='http' demux='' location='192.168.1.3:8080' file='\\192.168.1.3:8080' main debug: looking for access_demux module: 0 candidates main debug: no access_demux module matched "http" main debug: TIMER module_need() : 0.461 ms - Total 0.461 ms / 1 intvls (Avg 0.461 ms) main debug: creating access 'http' location='192.168.1.3:8080', path='\\192.168.1.3:8080' main debug: looking for access module: 2 candidates access_http debug: http: server='192.168.1.3' port=8080 file='' main debug: net: connecting to 192.168.1.3 port 8080 qt4 debug: IM: Deleting the input main debug: TIMER input launching for 'http://192.168.1.3:8080' : 13397.979 ms - Total 13397.979 ms / 1 intvls (Avg 13397.978 ms) qt4 debug: IM: Setting an input Need Help. Thanks in advance.
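
    One way to narrow this down, sketched here with a made-up file path: start an equivalent stream from the command line on the Linux box and then check that something is actually listening on port 8080; if the command-line stream plays on the client, the problem is in the GUI stream setup rather than in the network.

      cvlc /path/to/video.mp4 --sout '#standard{access=http,mux=ts,dst=:8080}'
      netstat -tlnp | grep 8080   # run on the server to confirm the listener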

    Read the article

  • How to add Windows Defender (MS Essentials) to the Windows Explorer right-click menu to scan a particular drive/folder on demand?

    - by avirk
    Windows 8 now has a built-in antivirus in Windows Defender (I call it an antivirus because it includes the MS Essentials functionality as well). But there is no option to scan a particular drive on demand by right-clicking it in Windows Explorer, as we had in Windows 7 with MS Essentials or with other antiviruses. I know we can run a custom scan for a particular drive or a specific folder, but that process is lengthy and time-consuming. This guide explains how to add Windows Defender to the desktop right-click menu, so I'm curious whether there is a way to add it to the Windows Explorer right-click menu so I can launch a scan whenever I need to.
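
    A sketch of what such an entry could look like; the registry key names and the MpCmdRun.exe path are assumptions, so treat it as illustrative rather than a tested recipe. Defender's command-line scanner can be wired into the folder and drive context menus through the registry:

      Windows Registry Editor Version 5.00

      ; "Scan with Windows Defender" for folders and for drives (sketch)
      [HKEY_CLASSES_ROOT\Directory\shell\DefenderScan]
      @="Scan with Windows Defender"
      [HKEY_CLASSES_ROOT\Directory\shell\DefenderScan\command]
      @="\"C:\\Program Files\\Windows Defender\\MpCmdRun.exe\" -Scan -ScanType 3 -File \"%1\""
      [HKEY_CLASSES_ROOT\Drive\shell\DefenderScan]
      @="Scan with Windows Defender"
      [HKEY_CLASSES_ROOT\Drive\shell\DefenderScan\command]
      @="\"C:\\Program Files\\Windows Defender\\MpCmdRun.exe\" -Scan -ScanType 3 -File \"%1\""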

    Read the article

  • Is MS Forefront Add-in for Exchange server detecting HTML/Redirector.C incorrectly?

    - by rhart
    Users of a website hosted by our organization occasionally send complaints that our registration confirmation emails are infected with HTML/Redirector.C. They are always using an MS Exchange Server with the MS Forefront for Exchange AV add-in. The thing is, I don't think the detection is legitimate. I think the issue is that the link in the email we send causes a redirect. I should point out that this is done for a legitimate purpose. :) Has anybody run into this before? Naturally, Microsoft provides absolutely no good information on this one: http://www.microsoft.com/security/portal/Threat/Encyclopedia/Entry.aspx?Name=Trojan%3aHTML%2fRedirector.C&ThreatID=-2147358338 I can't find any other explanation of HTML/Redirector.C on the Internet either. If anyone knows of a real description for this virus that would be greatly appreciated as well.

    Read the article

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database-independent and converts to the database-specific SQL queries. Once things start getting complicated, it is often preferable to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use a database-specific feature, why should one be tied to the database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some "fake SQL" language which can translate to any database's SQL query. Even Django's ORM could be built over it, so that if you go outside the ORM but don't use anything database-specific, you can still remain database-independent. I asked Jacob Kaplan-Moss the same question (in person): he advised me to stay with the database that I like and use its full power, and I agree with that. But my point was not that we should always be database-independent. My point is that we should be database-independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive. IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL DROP TABLE #Test ; GO CREATE TABLE #Test ( id INTEGER PRIMARY KEY CLUSTERED ); ; INSERT #Test (id) SELECT V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 1000 ; Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be: SELECT T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; That query produces a pretty efficient-looking query plan: Knowing that the source column is defined as an INTEGER, we could also express the query this way: SELECT T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; We get a similar-looking plan: If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows. The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167). If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON: Table '#Test'. Scan count 63, logical reads 126, physical reads 0. Table '#Test'. Scan count 01, logical reads 002, physical reads 0. The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  
    To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu): The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan. Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using a DMV: SELECT S.index_type_desc, S.index_depth FROM sys.dm_db_index_physical_stats ( DB_ID(N'tempdb'), OBJECT_ID(N'tempdb..#Test', N'U'), 1, 1, DEFAULT ) AS S ; Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected: There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well. You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time: DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON; SET STATISTICS XML OFF; ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; GO DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail. When is a seek not a seek?  When it is 63 seeks © Paul White 2011 email: [email protected] twitter: @SQL_kiwi

    Read the article

  • PHP ORM style of querying

    - by Petah
    OK, so I have made an ORM library for PHP. It uses syntax like so *(assume that $business_locations is an array)*: Business::type(Business::TYPE_AUTOMOTIVE)-> size(Business::SIZE_SMALL)-> left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, Business::id())-> left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id())-> where(Business::location_id(), SQL::in($business_locations))-> group_by(Business::id())-> select(SQL::count(BusinessOwner::id())); Which can also be represented as: $query = new Business(); $query->set_type(Business::TYPE_AUTOMOTIVE); $query->set_size(Business::SIZE_SMALL); $query->left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, $query->id()); $query->left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id()); $query->where(Business::location_id(), SQL::in($business_locations)); $query->group_by(Business::id()); $query->select(SQL::count(BusinessOwner::id())); This would produce a query like: SELECT COUNT(`business_owners`.`id`) FROM `businesses` LEFT JOIN `business_owners` ON `business_owners`.`business_id` = `businesses`.`id` LEFT JOIN `owners` ON `owners`.`id` = `business_owners`.`owner_id` WHERE `businesses`.`type` = 'automotive' AND `businesses`.`size` = 'small' AND `businesses`.`location_id` IN ( 1, 2, 3, 4 ) GROUP BY `businesses`.`id` Please keep in mind that the syntax might not be perfectly correct (I only wrote this off the top of my head). Anyway, what do you think of this style of querying? Is the first method or the second better/clearer/cleaner/etc.? What would you do to improve it?

    Read the article

  • How much can distance and latency (ms) affect the download speed?

    - by Prix
    Let's consider A (client) and B (server), where A downloads from B. How much can bad routing from A to B affect the download speed? For example, A does a tracert to B and gets a response of 10 hops, where the average latency is around 300 ms with 10% packet loss at the 4th hop, while when the connection is normal the average from A to B is 10-30 ms. Could this sort of problem reduce A's download speed drastically, or, as long as both sides and the routes in between have enough bandwidth for the full speed from B to A and vice versa, should it maintain the same speed? Besides tracert and ping analysis from A to B, what else is used to identify the problem? If you need extra information, please let me know.
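    As a rough back-of-the-envelope model (a sketch only; real TCP behaviour also depends on window scaling, congestion control and where the loss actually occurs), latency and loss cap the throughput of a single TCP connection roughly like this:
        from math import sqrt

        def window_limited_mbps(window_bytes, rtt_ms):
            """Ceiling imposed by the receive window: window / RTT."""
            return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

        def mathis_limited_mbps(mss_bytes, rtt_ms, loss_rate):
            """Mathis et al. approximation: (MSS / RTT) * (1.22 / sqrt(p))."""
            return (mss_bytes * 8 / (rtt_ms / 1000.0)) * (1.22 / sqrt(loss_rate)) / 1e6

        # 64 KiB receive window (a common default without window scaling):
        print(window_limited_mbps(65535, 30))        # ~17.5 Mbit/s at 30 ms RTT
        print(window_limited_mbps(65535, 300))       # ~1.7 Mbit/s at 300 ms RTT

        # 1460-byte segments, 300 ms RTT, 10% sustained loss:
        print(mathis_limited_mbps(1460, 300, 0.10))  # ~0.15 Mbit/s
    So going from ~30 ms to ~300 ms of RTT alone can cut the per-connection ceiling by roughly a factor of ten, and sustained loss along the path makes it far worse (note that loss reported by tracert at an intermediate hop is sometimes just ICMP de-prioritisation on that router, not real loss on the path). Beyond ping and tracert, tools such as pathping on Windows or mtr on Linux, which combine the route with per-hop loss statistics over time, are the usual next step.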

    Read the article

  • Where to download MS SQL Server 2005 Developer Edition?

    - by Mark
    Just got put in charge of a big web project. All I know is that the web server is running MS SQL 2005, so I need something comparable to test against locally. I figure Developer Edition is my best bet because it offers everything that Enterprise Edition does but is licensed for development purposes only. But this page is pretty worthless: http://www.microsoft.com/sqlserver/2005/en/us/developer.aspx Where do I actually download it? What about SQL Server 2005 Express? Would that meet my needs? I can't figure out all the differences between these stupid MS products.

    Read the article

  • MS-Outlook add-on to move a new message to the same folder as the rest of the thread

    - by Guss
    I'm forced to use MS Outlook at my job. While I very much like the feature that shows all the messages of a discussion thread (even those stored in different folders) in the inbox when a new message arrives for that thread, if the previous messages are in a different data file (which I'm forced to have, as the MS Exchange server quota is very small), the message list only shows the name of the data file and not the name of the folder where the messages are stored. Because I file my messages by context (i.e. all the emails for project A go into a "Project A" folder, etc.) and it's important for me to keep all the messages of a single thread in the same folder, it is sometimes hard to figure out into which folder I should file the new message. It would be a great help if there were some add-on or VBA script I could add to my setup that offers a shortcut key or a button to "file this message to the same folder as the previous messages in the conversation thread".
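    A rough VBA sketch of that idea (untested, requires Outlook 2010 or later for the Conversation object, and it simply reuses the folder of the first conversation root item that is already filed, so treat it as a starting point rather than a finished macro):
        ' Move the selected message to the folder that already holds an older
        ' message from the same conversation. Assumption: the first root item
        ' that is not the new message and not in the Inbox is representative.
        Sub FileWithConversation()
            Dim sel As Outlook.Selection
            Set sel = Application.ActiveExplorer.Selection
            If sel.Count = 0 Then Exit Sub

            Dim msg As Outlook.MailItem
            Set msg = sel.Item(1)

            Dim conv As Outlook.Conversation
            Set conv = msg.GetConversation
            If conv Is Nothing Then Exit Sub   ' store does not support conversations

            Dim itm As Object
            For Each itm In conv.GetRootItems
                If itm.EntryID <> msg.EntryID And itm.Parent.Name <> "Inbox" Then
                    msg.Move itm.Parent
                    Exit Sub
                End If
            Next
        End Sub
    Hooked up to a Quick Access Toolbar button or a shortcut key, that gets close to the "file with the thread" behaviour; a fuller version would walk the whole conversation tree with Conversation.GetChildren instead of only looking at the root items.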

    Read the article

< Previous Page | 249 250 251 252 253 254 255 256 257 258 259 260  | Next Page >