Search Results

Search found 18272 results on 731 pages for 'ldap query'.

Page 674/731 | < Previous Page | 670 671 672 673 674 675 676 677 678 679 680 681  | Next Page >

  • Listen to Response on HTML Form embedded in GWT View?

    - by confile
    I have HTML like the following:

        <div>
          <form>
            <input type="text" />
            <button class="sendForm" value="Send form" />
          </form>
        </div>
        <script>
          // post the form with jQuery post
          // register a callback that handles the response
        </script>

    I use this type of form a lot with a JavaScript/jQuery overlay that displays the form; that can be handled, for example, with plugins like FancyBox. I also want to embed this form into a GWT view. Let's assume the form cannot be created on the client side because it contains some server-based markup language that sets up model data. If I want to use this form in GWT, I have to do the following: tell GWT the form's request URL and use a RequestBuilder to fetch the HTML content of the form, then insert it into a div generated by GWT. So far so good.

    Problem: when the user hits the send button, the response is handled by the jQuery callback that is inside the script below the form. Is there a way to access this callback from within GWT? Is there a way to override the jQuery send action? Since the code is HTML and comes from the server, I cannot place UiBinder UiFields inside it to get access to these DOM elements. I need to make the response of the submitted form accessible to GWT. Can I achieve this with JSNI?

    Read the article

  • SQL putting two single quotes around datetime fields and fails to insert record

    - by user82613
    I am trying to INSERT into a SQL database table, but it doesn't work. So I used SQL Server Profiler to see how the query was being built; what it shows is the following:

        declare @p1 int
        set @p1=0
        declare @p2 int
        set @p2=0
        declare @p3 int
        set @p3=1
        exec InsertProcedureName
            @ConsumerMovingDetailID=@p1 output,
            @UniqueID=@p2 output,
            @ServiceID=@p3 output,
            @ProjectID=N'0',
            @IPAddress=N'66.229.112.168',
            @FirstName=N'Mike',
            @LastName=N'P',
            @Email=N'[email protected]',
            @PhoneNumber=N'(254)637-1256',
            @MobilePhone=NULL,
            @CurrentAddress=N'',
            @FromZip=N'10005',
            @MoveInAddress=N'',
            @ToZip=N'33067',
            @MovingSize=N'1',
            @MovingDate=''2009-04-30 00:00:00:000'',
                        /* Problem here ^^^ */
            @IsMovingVehicle=0,
            @IsPackingRequired=0,
            @IncludeInSaveologyPlanner=1
        select @p1, @p2, @p3

    As you can see, it puts two pairs of single quotes around the datetime value, which produces a syntax error in SQL. I wonder if there is anything I must configure somewhere? Any help would be appreciated.

    Here are the environment details: Visual Studio 2008, .NET 3.5, MS SQL Server 2005. Here is the .NET code I'm using:

        //call procedure for results
        strStoredProcedureName = "usp_SMMoverSearchResult_SELECT";
        Database database = DatabaseFactory.CreateDatabase();
        DbCommand dbCommand = database.GetStoredProcCommand(strStoredProcedureName);
        dbCommand.CommandTimeout = DataHelper.CONNECTION_TIMEOUT;
        database.AddInParameter(dbCommand, "@MovingDetailID", DbType.String, objPropConsumer.ConsumerMovingDetailID);
        database.AddInParameter(dbCommand, "@FromZip", DbType.String, objPropConsumer.FromZipCode);
        database.AddInParameter(dbCommand, "@ToZip", DbType.String, objPropConsumer.ToZipCode);
        database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, objPropConsumer.MoveDate);
        database.AddInParameter(dbCommand, "@PLServiceID", DbType.Int32, objPropConsumer.ServiceID);
        database.AddInParameter(dbCommand, "@FromAreaCode", DbType.String, pFromAreaCode);
        database.AddInParameter(dbCommand, "@FromState", DbType.String, pFromState);
        database.AddInParameter(dbCommand, "@ToAreaCode", DbType.String, pToAreaCode);
        database.AddInParameter(dbCommand, "@ToState", DbType.String, pToState);
        DataSet dstSearchResult = new DataSet("MoverSearchResult");
        database.LoadDataSet(dbCommand, dstSearchResult, new string[] { "MoverSearchResult" });

    Read the article

  • ASP.NET server data persistence

    - by Wayne Werner
    Hi, I'm not really sure exactly how this question should be phrased, so please be patient if I ask the wrong thing.

    I'm writing an ASP.NET application using VB as the code-behind language. I have a data access class that connects to the DB to run the query (parameterized, of course), and another class to perform the validation tasks; I access this class from my aspx page. What I would like is to be able to store the data server side and wait for the user to choose from a few options based on the validity of the data. But unless my understanding is completely off, won't having persistent data objects on the server cause problems when multiple users connect?

    My ultimate goal is that once the data has been validated, the end user can't modify it. Currently I'm validating the data, but I still have to retrieve it from the web form AFTER the user says OK, which obviously leaves open the possibility of injecting bad data either accidentally (unlikely) or on purpose (also unlikely for the user, but I'd prefer not to take the chance).

    So am I completely off in my understanding? If so, can someone point me to a resource that provides some instructions on keeping persistent data on the server, or provide instruction? Thanks!
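
    For per-user, per-request-independent storage between postbacks, ASP.NET session state is the usual mechanism: it is keyed to each visitor, so concurrent users do not collide. Below is a minimal sketch of that idea (in C# for brevity; the VB equivalent is a direct translation). ValidatedRecord, Validator.Validate and dataAccess.Save are hypothetical names standing in for the question's own classes.

        // In the page that runs validation: keep the validated object server side,
        // keyed to the current user's session, instead of round-tripping it to the form.
        ValidatedRecord record = Validator.Validate(rawInput);   // hypothetical validation call
        Session["PendingRecord"] = record;

        // Later, when the user picks one of the options: read it back and persist it.
        // The browser never gets a chance to alter the already-validated values.
        var pending = (ValidatedRecord)Session["PendingRecord"];
        if (pending != null)
        {
            dataAccess.Save(pending);            // hypothetical data access call
            Session.Remove("PendingRecord");
        }

    Because Session is per user, this sidesteps the multi-user problem the question worries about, at the cost of holding the object in server memory until the user decides.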

    Read the article

  • c programming malloc question

    - by user535256
    Hello guys, just a query regarding the C malloc() function. I am read()ing a number of bytes from a file to get the length of a filename, like this:

        read(file, &namelen, sizeof(unsigned char));

    The variable namelen is of type unsigned char and was written into the file as that type (1 byte). Now namelen holds the length of the filename, i.e. namelen = 8 if the file name was 'data.txt', plus an extra \0 at the end; that is working fine.

    Now I have a structure recording file info, i.e. filename, file length, content size etc.:

        struct fileinfo {
            char *name;
            /* ...... other variables like size etc. */
        };
        struct fileinfo *files;

    Question: I want to make the files.name variable the size of namelen, i.e. 8, so I can successfully write the filename into it, like:

        files[i].name = malloc(namelen);

    However, I don't want malloc(sizeof(namelen)), as that would make file.name 1 byte, the size of its type unsigned char. I want it to be the value stored inside the variable namelen, i.e. 8, giving file.name[8], so data.txt can be read() from the file as 8 bytes and written straight into file.name. Is there a way to do this? My current code is this and returns 4, not 8:

        files[i].name = malloc(namelen);
        // strlen(files[i].name) - returns 4
        // perhaps something like malloc(sizeof(&namelen)), but that does not work

    Thanks for any suggestions.

    I have tried the suggestions, guys, but I now get a segmentation fault using:

        printf("\nsizeofnamelen=%x\n", namelen);   // gives 8 for data.txt
        files[i].name = malloc(namelen + 1);
        read(file, &files[i].name, namelen);
        int len = strlen(files[i].name);
        printf("\nnamelen=%d", len);
        printf("\nname=%s\n", files[i].name);

    When I try to open() the file with that files[i].name variable it won't open, so the data does not appear to be getting written inside the read() into &files[i].name, and strlen() causes a segmentation fault, as does trying to print the filename.

    Read the article

  • Using nHibernate to map two different data models to one entity model

    - by Dan
    I have two different data models that map to the same Car entity. I needed to create a second entity called ParkedCar, which is identical to Car (and therefore inherits from it), in order to stop NHibernate complaining that two mappings exist for the same entity.

        public class Car
        {
            protected Car()
            {
                IsParked = false;
            }

            public virtual int Id { get; set; }
            public bool IsParked { get; internal set; }
        }

        public class ParkedCar : Car
        {
            public ParkedCar()
            {
                IsParked = true;
            }

            // no additional properties beyond Car; merely exists to support the
            // mapping and to signify that the car is parked
        }

    The only issue is that when I come to retrieve a Car from the database using the Criteria API, like so:

        SessionProvider.OpenSession.Session.CreateCriteria<Car>()
            .Add(Restrictions.Eq("Id", 123))
            .List<Car>();

    the query brings back Car entities that are from the ParkedCar data model. It's as if NHibernate defaults to the specialised entity. And the mappings are definitely looking in the right place:

        <class name="Car" xmlns="urn:nhibernate-mapping-2.2" table="tblCar">
        <class name="ParkedCar" xmlns="urn:nhibernate-mapping-2.2" table="tblParkedCar">

    How do I stop this?
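
    Because ParkedCar is mapped as its own class, a criteria query against the base Car runs as an implicitly polymorphic query and pulls in the subclass mapping as well. A hedged sketch of one way to restrict it, using NHibernate's special "class" property in the criteria; treat the exact property name and behaviour as assumptions to verify against the NHibernate version in use (the other commonly cited option is marking the Car <class> mapping with polymorphism="explicit"):

        // Keep only rows whose concrete mapped type is Car itself, excluding ParkedCar.
        var cars = SessionProvider.OpenSession.Session.CreateCriteria<Car>()
            .Add(Restrictions.Eq("Id", 123))
            .Add(Restrictions.Eq("class", typeof(Car)))
            .List<Car>();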

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting a bigger project, the discussion arose of how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

    1. One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
    2. Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.)

    Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection query based data access layer.

    Finally, are the arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there other arguments they/I might have missed?

    (I should probably add that I think this is the first "big" .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)

    Read the article

  • SQL connection to database repeating

    - by user175084
    OK, I am using the SQL database to get values from different tables, so I make the connection and get the values like this:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();
        SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Machines", connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlCmd.Parameters.AddWithValue("@node", node);
        sqlDa.Fill(dt);
        connection.Close();

    This is one query on the page, and I am calling many other queries on the same page. So do I need to open and close the connection every time? Also, this portion is common to all of them:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();

    Can I put it in one function and call that instead? The code would look cleaner. I tried doing that, but I get errors like:

        Connection does not exist in the current context.

    Any suggestions? Thanks.
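
    A minimal sketch of the kind of helper the question describes: one method that builds and opens the connection, with each query wrapped in using blocks so the connection is still closed per call (ADO.NET connection pooling keeps that cheap). The XYZConnectionString name is taken from the post; the class and method names are illustrative.

        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;

        public static class Db
        {
            // The one place that knows how to create and open a connection.
            public static SqlConnection OpenConnection()
            {
                var connection = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString);
                connection.Open();
                return connection;
            }
        }

        // Usage in the page. Declaring the local variable in this scope avoids the
        // "does not exist in the current context" error that comes from referencing
        // a connection variable declared inside another method.
        DataTable dt = new DataTable();
        using (SqlConnection connection = Db.OpenConnection())
        using (SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Machines", connection))
        {
            sqlCmd.Parameters.AddWithValue("@node", node);
            using (var sqlDa = new SqlDataAdapter(sqlCmd))
            {
                sqlDa.Fill(dt);
            }
        }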

    Read the article

  • entity framework insert bug

    - by tmfkmoney
    I found a previous question which seemed related, but there's no resolution and it's 5 months old, so I've opened my own version: http://stackoverflow.com/questions/1545583/entity-framework-inserting-new-entity-via-objectcontext-does-not-use-existing-e

    When I insert records into my database with the following code, it works fine for a while and then eventually starts inserting null values in the referenced field. This typically happens after I do an update on my model from the database, although not always. I'm using a MySQL database for this. I have debugged the code and the values are being set properly before the save; they're just not getting inserted properly.

    I can always fix the issue by re-creating the model without touching any of my code. I have to recreate the entire model, though; I can't just dump the relevant tables and re-add them. This makes me think it doesn't have anything to do with my code but something to do with the Entity Framework. Does anyone else have this problem and/or solved it?

        using (var db = new MyModel())
        {
            var stocks = from record in query
                         let ticker = record.Ticker
                         select new
                         {
                             company = db.Companies.FirstOrDefault(c => c.ticker == ticker),
                             price = Convert.ToDecimal(record.Price),
                             date_stamp = Convert.ToDateTime(record.DateTime)
                         };

            foreach (var stock in stocks)
            {
                if (stock.company != null)
                {
                    var price = new StockPrice
                    {
                        Company = stock.company,
                        price = stock.price,
                        date_stamp = stock.date_stamp
                    };

                    db.AddToStockPrices(price);
                }
            }

            db.SaveChanges();
        }

    Read the article

  • A good data model for finding a user's favorite stories

    - by wings
    Original Design

    Here's how I originally had my models set up:

        class UserData(db.Model):
            user = db.UserProperty()
            favorites = db.ListProperty(db.Key)  # list of story keys
            # ...

        class Story(db.Model):
            title = db.StringProperty()
            # ...

    On every page that displayed a story I would query UserData for the current user:

        user_data = UserData.all().filter('user =', users.get_current_user()).get()
        story_is_favorited = (story in user_data.favorites)

    New Design

    After watching this talk, Google I/O 2009 - Scalable, Complex Apps on App Engine, I wondered if I could set things up more efficiently.

        class FavoriteIndex(db.Model):
            favorited_by = db.StringListProperty()

    The Story model is the same, but I got rid of the UserData model. Each instance of the new FavoriteIndex model has a Story instance as a parent, and each FavoriteIndex stores a list of user ids in its favorited_by property.

    If I want to find all of the stories that have been favorited by a certain user:

        index_keys = FavoriteIndex.all(keys_only=True).filter('favorited_by =', users.get_current_user().user_id())
        story_keys = [k.parent() for k in index_keys]
        stories = db.get(story_keys)

    This approach avoids the serialization/deserialization that's otherwise associated with the ListProperty.

    Efficiency vs Simplicity

    I'm not sure how efficient the new design is, especially after a user decides to favorite 300 stories, but here's why I like it:

    1. A favorited story is associated with a user, not with her user data.
    2. On a page where I display a story, it's pretty easy to ask the story if it's been favorited (without calling up a separate entity filled with user data):

        fav_index = FavoriteIndex.all().ancestor(story).get()
        fav_of_current_user = users.get_current_user().user_id() in fav_index.favorited_by

    3. It's also easy to get a list of all the users who have favorited a story (using the method in #2).

    Is there an easier way? Please help. How is this kind of thing normally done?

    Read the article

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers,

    I need to query against objects with many/complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2 or Oracle 11g, or document-based solutions such as RavenDB, or simply some spatial database (GIS). Any thoughts?

    Regards

    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only. Almost no updates, except to correct mistakes in input; inserts happen overnight and are not that frequent. The join tables are predicted beforehand, but the number of self joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this would perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

    Read the article

  • Round time to 5 minute nearest SQL Server

    - by Drako
    I don't know if this will be useful to somebody, but I went crazy looking for a solution and ended up writing it myself. Here is a function that, given a date passed as a parameter, returns the same date with the time rounded to the nearest multiple of 5 minutes. It is a slow query, so if anyone has a better solution, it is welcome. A greeting.

        CREATE FUNCTION [dbo].[RoundTime] (@Time DATETIME)
        RETURNS DATETIME
        AS
        BEGIN
            DECLARE @min nvarchar(50)
            DECLARE @val int
            DECLARE @hour int
            DECLARE @temp int
            DECLARE @day datetime
            DECLARE @date datetime

            SET @date = CONVERT(DATETIME, @Time, 120)
            SET @day = (select DATEADD(dd, 0, DATEDIFF(dd, 0, @date)))
            SET @hour = (select datepart(hour, @date))
            SET @min = (select datepart(minute, @date))

            IF LEN(@min) > 1
            BEGIN
                SET @val = CAST(substring(@min, 2, 1) as int)
            END
            ELSE
            BEGIN
                SET @val = CAST(substring(@min, 1, 1) as int)
            END

            IF @val <= 2
            BEGIN
                SET @val = CAST(CAST(@min as int) - @val as int)
            END
            ELSE
            BEGIN
                IF (@val <> 5)
                BEGIN
                    SET @temp = 5 - CAST(@min%5 as int)
                    SET @val = CAST(CAST(@min as int) + @temp as int)
                END
                IF (@val = 60)
                BEGIN
                    SET @val = 0
                    SET @hour = @hour + 1
                END
                IF (@hour = 24)
                BEGIN
                    SET @day = DATEADD(day, 1, @day)
                    SET @hour = 0
                    SET @min = 0
                END
            END

            RETURN CONVERT(datetime,
                CAST(DATEPART(YYYY, @day) as nvarchar) + '-' +
                CAST(DATEPART(MM, @day) as nvarchar) + '-' +
                CAST(DATEPART(dd, @day) as nvarchar) + ' ' +
                CAST(@hour as nvarchar) + ':' +
                CAST(@val as nvarchar), 120)
        END

    Read the article

  • Storing a jpa entity where only the timestamp changes results in updates rather than inserts (desire

    - by David Schlenk
    I have a JPA entity that stores a fk id, a boolean and a timestamp:

        @Entity
        public class ChannelInUse implements Serializable {

            @Id
            @GeneratedValue
            private Long id;

            @ManyToOne
            @JoinColumn(nullable = false)
            private Channel channel;

            private boolean inUse = false;

            @Temporal(TemporalType.TIMESTAMP)
            private Date inUseAt = new Date();

            ...
        }

    I want every new instance of this entity to result in a new row in the table. For whatever reason, no matter what I do, it always results in the row getting updated with a new timestamp value rather than creating a new row. Even tried to just use a native query to run an insert, but channel's ID wasn't populated yet, so I gave up on that.

    I've tried using an embedded id class consisting of channel.getId and inUseAt. My equals and hashCode are:

        public boolean equals(Object obj) {
            if (this == obj)
                return true;
            if (!(obj instanceof ChannelInUse))
                return false;
            ChannelInUse ciu = (ChannelInUse) obj;
            return (
                (this.inUseAt == null ? ciu.inUseAt == null : this.inUseAt.equals(ciu.inUseAt))
                && (this.inUse == ciu.inUse)
                && (this.channel == null ? ciu.channel == null : this.channel.equals(ciu.channel))
            );
        }

        /**
         * hashcode generated from at, channel and inUse properties.
         */
        public int hashCode() {
            int hash = 1;
            hash = hash * 31 + (this.inUseAt == null ? 0 : this.inUseAt.hashCode());
            hash = hash * 31 + (this.channel == null ? 0 : this.channel.hashCode());
            if (inUse)
                hash = hash * 31 + 1;
            else
                hash = hash * 31 + 0;
            return hash;
        }

    I've tried using Hibernate's Entity annotation with mutable=false. I'm probably just not understanding what makes an entity unique or something. Hit Google pretty hard but can't figure this one out.

    Read the article

  • Problems with string parameter insertion into prepared statement

    - by c0d3x
    Hi, I have a database running on an MS SQL Server. My application communicates with it via JDBC and ODBC. Now I am trying to use prepared statements. When I insert a numeric (Long) parameter everything works fine. When I insert a string parameter it does not work: there is no error message, but an empty result set.

        WHERE column LIKE ('%' + ? + '%')   -- inserted "test"    -> empty result set
        WHERE column LIKE ?                 -- inserted "%test%"  -> empty result set
        WHERE column = ?                    -- inserted "test"    -> works

    But I need the LIKE functionality. When I insert the same string directly into the query string (not as a prepared statement parameter) it runs fine:

        WHERE column LIKE '%test%'

    It looks a little bit like double quoting to me, but I never used quotes inside a string. I use preparedStatement.setString(int index, String x) for insertion. What is causing this problem? How can I fix it? Thanks in advance.

    Read the article

  • Search field using Ultraseek

    - by tony noriega
    So I realized today that using IE to do a search on my site, for instance for the term "documents", returns the search results. If I use Firefox or Chrome, the data in the input field is not recognized.

    I looked at the code and realized that there are no <form> tags around the input fields... BUT if I put them in, then IE does not work. What the heck do I do?

        <div class="searchbox" id="searchbox">
          <script type="text/ecmascript">
            function RunSearch() {
              window.location = "http://searcher.example.com:8765/query.html?ql=&amp;col=web1&amp;qt=" + document.getElementById("search").value;
            }
          </script>
          <div class="formSrchr">
            <input type="text" size="20" name="qt" id="search" />
            <input type="hidden" name="qlOld" id="qlOld" value="" />
            <input type="hidden" name="colOld" id="colOld" value="web1" />
            <input type="image" name="imageField" src="/_images/search-mag.gif" width="20" height="20" onclick="RunSearch();" />
          </div>
        </div> <!-- /searchbox -->

    Read the article

  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside of a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table and then dropping it at the end of the transaction. This mostly works.

    Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception.

    For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this.

    Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.

    Read the article

  • ajax checkbox filter

    - by user1018298
    I need some help with a checkbox filter I am working on. I have four groups of checkboxes (type, vehicle, availability, price) and I would like to filter the content on the page based on the user's input. I can get the filter working OK, but I am having issues with the groups: I can only seem to match one checkbox from each group rather than all checkboxes from each group. For example, a visitor can select 1 option in group 1, all options in group 2, 1 option in group 3 and 1 option in group 4. I am using ajax to build an SQL query to return the correct filtered results from my database. Here is my code:

        $('div.filters').delegate('input:checkbox', 'change', function() {
            var type = $('input.exp_type:checked').map(function () {
                return $(this).attr('value');
            }).get().join(',');

            var vehicle = $('input.vehicle_type:checked').map(function () {
                return $(this).attr('value');
            }).get().join(',');

            var avail = $('input.availability:checked').map(function () {
                return $(this).attr('value');
            }).get().join(',');

            var price = $('input.price:checked').map(function () {
                return $(this).attr('value');
            }).get().join(',');

            //alert($options);
            $.ajax({
                type: "POST",
                url: "filtervouchers.php",
                data: "type=" + type + "&vehicle=" + vehicle + "&avail=" + avail + "&price=" + price,
                success: function(data) {
                    $('.results').html(data);
                }
            }); //end ajax
        })

    and the PHP code:

        $type = mysql_real_escape_string($_POST['type']);
        $vehicles = mysql_real_escape_string($_POST['vehicle']);
        $avail = mysql_real_escape_string($_POST['avail']);
        $price = mysql_real_escape_string($_POST['price']);

        if ($type != '') {
            $sql2 = " AND exp_type IN ($type)";
        }
        if ($vehicles != '') {
            $sql3 = " AND vehicle_type LIKE '%$vehicles%'";
        }
        if ($avail != '') {
            $sql4 = " AND availability LIKE '%,' $avail% ','";
        }
        if ($price != '') {
            $sql5 = " AND price_band IN ($price)";
        }

    Can this be done using jQuery?

    Read the article

  • How to disable a checkbox based on date using php?

    - by udaya
    I have iterated a table based on my query result. I have a column lastDate that contains a date value like 01/24/2010. Now I have to check the date value against the current date: if the current date is less than or equal to lastDate, I have to enable the checkbox, otherwise disable it. Any suggestion how this can be done in PHP? Here is my code:

        <? if (isset($comment)) {
            echo '<tr><td class=table_label colspan=5>'.$comment.'</td></tr>';
        } ?>

        <td align="center" class="table_label" id="txtsubdat">
        <?
            $dd = $row['dDate'];
            if ($dd == '') {
            } else {
                $str = $dd;
                $dd = strtotime($str);
                echo date('m/d/Y', $dd);
            }
        ?>
        </td>
        <td align="center">
            <input type="checkbox" name="group" id="group" value="<?=$row['dAssignment_id']?>" onclick="checkdisplay(this);">
        </td>
        <? } ?>

    Read the article

  • question about MySQL transaction and trigger

    - by WilliamLou
    I quickly browsed the MySQL manual but didn't find exact information about my question. Here it is: suppose I have an InnoDB table A with two triggers, triggered by 'AFTER INSERT ON A' and 'AFTER UPDATE ON A'. More specifically, one trigger is defined as:

        CREATE TRIGGER test_trigger AFTER INSERT ON A
        FOR EACH ROW BEGIN
            INSERT INTO B SELECT * FROM A WHERE A.col1 = NEW.col1
        END;

    You can ignore the query between BEGIN and END; basically this trigger inserts several rows into table B, which is also an InnoDB table.

    Now, if I start a transaction and then insert many rows, say 10K, into table A: if there is no trigger associated with table A, all these inserts are atomic, that's for sure. If table A is associated with several insert/update triggers which insert/update many rows into table B and/or table C etc., will all these inserts and/or updates still be atomic as a whole?

    I think they are still atomic, but it's kind of difficult to test and I can't find any explanation in the manual. Can anyone confirm this? Thanks a lot!

    Read the article

  • Linq, should I join those two queries together?

    - by 5YrsLaterDBA
    I have a Logins table which records when a user logs in, logs out, or a login fails, along with a timestamp. Now I want to get the list of login failures that happened after the last successful login and within the last 24 hours.

    What I am doing now is getting the last login timestamp first, then using a second query to get the final list. Do you think I should join those two queries together? Why or why not?

        var lastLoginTime = (from inRecord in db.Logins
                             where inRecord.Users.UserId == userId
                                   && inRecord.Action == "I"
                             orderby inRecord.Timestamp descending
                             select inRecord.Timestamp).Take(1);

        if (lastLoginTime.Count() == 1)
        {
            DateTime lastInTime = (DateTime)lastLoginTime.First();
            DateTime since = DateTime.Now.AddHours(-24);
            String actionStr = "F";

            var records = from record in db.Logins
                          where record.Users.UserId == userId
                                && record.Timestamp >= since
                                && record.Action == actionStr
                                && record.Timestamp > lastInTime
                          orderby record.Timestamp
                          select record;
        }
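
    For comparison, here is a minimal sketch of folding the two into one query by nesting the last-login lookup as a subquery. It assumes the same db.Logins model, userId, and "I"/"F" action codes as above; whether one round trip is actually faster depends on the LINQ provider and the SQL it generates, and note that with no prior successful login FirstOrDefault() falls back to the default value, so the behaviour differs slightly from the two-step version.

        // One query: the inner subquery finds the most recent successful login ("I"),
        // the outer query keeps only failures ("F") newer than that and within 24 hours.
        DateTime since = DateTime.Now.AddHours(-24);

        var failedSinceLastLogin =
            from record in db.Logins
            where record.Users.UserId == userId
                  && record.Action == "F"
                  && record.Timestamp >= since
                  && record.Timestamp > (from inRecord in db.Logins
                                         where inRecord.Users.UserId == userId
                                               && inRecord.Action == "I"
                                         orderby inRecord.Timestamp descending
                                         select inRecord.Timestamp).FirstOrDefault()
            orderby record.Timestamp
            select record;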

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings.

    I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us row 1 and row 2:

        select * from A where Col1 like '%klah%' or Col2 like '%klah%' or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:

        select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that write the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                 |
        ---------------------------------
        1  | oklahoma colorado Utah     |
        2  | arkansas colorado oklahoma |
        3  | florida michigan florida   |
        ---------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008; however, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

    Read the article

  • What's the SQL function for timestamp manipulation?

    - by Eric Leroy
    I'm new to SQL Server and its time functions are different from MySQL's, so I'm having a terrible time finding a good site reference with useful timestamp queries. I'm not able to work out the correct way of writing this query:

        Id      | Timestamp
        -----------------------------------
        1145744 | 2012-10-10 18:15:11.500
        1145743 | 2012-10-10 18:15:11.313
        1145742 | 2012-10-10 18:15:11.313
        1145741 | 2012-10-10 18:15:11.253
        1145740 | 2012-10-10 18:15:11.190
        1145739 | 2012-10-10 18:15:11.190
        1145738 | 2012-10-10 18:15:11.127
        1145737 | 2012-10-10 18:15:11.067
        1145736 | 2012-10-10 18:15:11.063
        1145735 | 2012-10-10 18:15:10.940
        1145734 | 2012-10-10 18:15:10.817

        SELECT * FROM table WHERE Timestamp ... RANGE

    I need the range between 2 timestamps so I can select rows by the following parameters: second range, minute range, hour range, day range, week range, month range, year range. Is there one function I can put 2 timestamps into and get the range, or is it a mix of functions I need? Any good site references would be greatly appreciated; the MSDN site isn't helping me isolate the proper way of doing this. I've been searching for about an hour trying to get the last day, from 1:30 PM yesterday to 1:30 PM today.

    Read the article

  • How do you lock down & secure files stored on server in ASP.NET?

    - by Jon
    How do I go about securing files that are stored on the server?

    We have an ASP.NET app which generates PDFs. These are not stored in the wwwroot folder but in another folder, e.g. C:\inetpub\data. This provides more security, but maybe not enough. The ASP.NET/IIS process needs write access to this folder so it can generate the PDFs there. Once a PDF is generated, it can be viewed using an ASP.NET form called viewpdf.aspx, with the file to be viewed added to the query string like so: viewpdf.aspx?FILE=mynewfile.pdf. This is loaded from a GridView. The full path to C:\inetpub\data is resolved and loaded in the Page_Load event of the viewer page.

    Now I'm wondering how to secure this. Anybody could view the file: not by entering the URL directly, since the folder isn't served by IIS (it's not in wwwroot), but by changing the query string on the viewpdf page. How do I stop anybody hacking this?
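
    A minimal sketch of the kind of checks a viewer page like this usually needs before streaming the file: require an authenticated (and suitably authorized) user, and strip any path information from the FILE value so the query string cannot be used to walk outside the data folder. The names mirror the question (viewpdf.aspx, C:\inetpub\data); the authorization rule itself is an assumption and would be whatever the app actually requires.

        // viewpdf.aspx.cs (requires: using System; using System.IO; using System.Web.UI;)
        protected void Page_Load(object sender, EventArgs e)
        {
            // 1. Only signed-in users (or a stricter app-specific rule) may view files.
            if (!User.Identity.IsAuthenticated)
            {
                Response.StatusCode = 403;
                Response.End();
                return;
            }

            // 2. Reduce FILE to a bare file name to block ..\ path traversal.
            string requested = Path.GetFileName(Request.QueryString["FILE"] ?? "");
            string fullPath = Path.Combine(@"C:\inetpub\data", requested);

            if (requested.Length == 0 || !File.Exists(fullPath))
            {
                Response.StatusCode = 404;
                Response.End();
                return;
            }

            // 3. Stream the file; it never needs to live under wwwroot.
            Response.ContentType = "application/pdf";
            Response.WriteFile(fullPath);
            Response.End();
        }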

    Read the article

  • Is it possible to replace values in a queryset before sending it to your template?

    - by Issy
    Hi guys, I'm wondering if it's possible to change a value returned from a queryset before sending it off to the template. Say for example you have a bunch of records:

        Date       | Time  | Description
        10/05/2010 | 13:30 | Testing...

    etc. However, based on the day of the week, the time may change, and this is static. For example, on a Monday the time is ALWAYS 15:00. Now you could add another table to configure special cases, but to me that seems overkill, as this is a rule. How would you replace that value before sending it to the template? I thought about using the new if tags (if day=1), but this is more business logic than presentation.

    I tested this in a custom template tag:

        def render(self, context):
            result = self.model._default_manager.filter(from_date__lte=self.now).filter(to_date__gte=self.now)
            if self.day == 4:
                result = result.exclude(type__exact=2).order_by('time')
            else:
                result = result.order_by('type')
            result[0].time = '23:23:23'
            context[self.varname] = result
            return ''

    However, it still displays the results from the DB. Is this somehow related to 'lazy' evaluation of querysets?

    Thanks!

    Update, responding to comments below: the value is not stored wrong in the DB, it's stored correctly; there is just a small side case where it needs to change. For example, I have a from date and a to date, and my query checks whether today's date is between them. With this they could set up a from date to date range for an entire year, and the special cases (like Mondays, as an example) are taken care of. If I stored the special case in the DB instead, I would have to capture several more records to cater for it, i.e. I would be capturing the same information just to cater for that one day when the time changes. (And the time always changes on the same day, and is always the same.)

    Read the article

  • Mysql partitioning: Partitions outside of date range is included

    - by Sturlum
    Hi, I have just tried to configure partitions based on date, but it seems that MySQL still includes a partition with no relevant data. It will use the relevant partition but also include the oldest one for some reason. Am I doing it wrong? The version is 5.1.44 (MyISAM).

    I first added a few partitions based on "day", which is of type "date":

        ALTER TABLE ptest PARTITION BY RANGE(TO_DAYS(day)) (
            PARTITION p1 VALUES LESS THAN (TO_DAYS('2009-08-01')),
            PARTITION p2 VALUES LESS THAN (TO_DAYS('2009-11-01')),
            PARTITION p3 VALUES LESS THAN (TO_DAYS('2010-02-01')),
            PARTITION p4 VALUES LESS THAN (TO_DAYS('2010-05-01'))
        );

    After a query, I find that it uses the "old" partition, which should not contain any relevant data:

        mysql> explain partitions select * from ptest where day between '2010-03-11' and '2010-03-12';
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | ptest | p1,p4      | range | day           | day  | 3       | NULL |   79 | Using where |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+

    When I select a single day, it works:

        mysql> explain partitions select * from ptest where day = '2010-03-11';
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        | id | select_type | table | partitions | type | possible_keys | key  | key_len | ref   | rows | Extra |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        |  1 | SIMPLE      | ptest | p4         | ref  | day           | day  | 3       | const |   39 |       |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+

    Read the article

  • What's an effective way to move data from one open browser tab to another?

    - by slk
    I am looking for a quick way to grab some data off of one Web page and throw it into another. I don't have access to the query string in the URL of the second page, so passing the data that way is not an option. Right now, I am using a Greasemonkey user script in tandem with a JS bookmarklet trigger: javascript:doIt();

        // ==UserScript==
        // @include public_site
        // @include internal_site
        // ==/UserScript==

        if (document.location.host.match(internal_site)) {
            var datum1 = GM_getValue("d1");
            var datum2 = GM_getValue("d2");
        }

        unsafeWindow.doIt = function() {
            if (document.location.host.match(public_site)) {
                var d1 = innerHTML of page element 1;
                var d2 = innerHTML of page element 2;
                // Next two lines use setTimeout to bypass GM_setValue restriction
                window.setTimeout(function() { GM_setValue("d1", d1); }, 0);
                window.setTimeout(function() { GM_setValue("d2", d2); }, 0);
            } else if (document.location.host.match(internal_site)) {
                document.getElementById("field1").value = datum1;
                document.getElementById("field2").value = datum2;
            }
        }

    While I am open to another method, I would prefer to stay with this basic model if possible, as this is just a small fraction of the code in doIt(), which is used on several other pages, mostly to automate date-based form fills; people really like their "magic button."

    The above code works, but there's an interruption to the workflow: in order for the user to know which page on the public site to grab data from, the internal page has to be opened first. Then, once the GM cookie is set from the public page, the internal page has to be reloaded to get the proper information into the internal page variables. I'm wondering if there's any way to GM_getValue() at bookmarklet-clicktime to prevent the need for a refresh. Thanks!

    Read the article
