Search Results

Search found 10177 results on 408 pages for 'thumbs db'.

  • WPF & Linq To SQL binding ComboBox to foreign key

    - by ZeroDelta
    I'm having trouble binding a ComboBox to a foreign key in WPF using LINQ to SQL. It works fine when displaying records, but if I change the selection on the ComboBox, that change does not seem to affect the property to which it is bound. My SQL Server Compact file has three tables: Players (PK is PlayerID), Events (PK is EventID), and Matches (PK is MatchID). Matches has FKs for the other two, so that a match is associated with a player and an event. My window for editing a match uses a ComboBox to select the Event, and the ItemsSource is set to the result of a LINQ query that pulls all of the Events. Of course the user should be able to select the Event based on EventName, not EventID. Here's the XAML:

        <ComboBox x:Name="cboEvent"
                  DisplayMemberPath="EventName"
                  SelectedValuePath="EventID"
                  SelectedValue="{Binding Path=EventID, UpdateSourceTrigger=PropertyChanged}" />

    And some code-behind from the Loaded event handler:

        var evt = from ev in db.Events orderby ev.EventName select ev;
        cboEvent.ItemsSource = evt.ToList();

        var mtch = from m in db.Matches
                   where m.PlayerID == ((Player)playerView.CurrentItem).PlayerID
                   select m;
        matchView = (CollectionView)CollectionViewSource.GetDefaultView(mtch);
        this.DataContext = matchView;

    When displaying matches, this works fine: I can navigate from one match to the next and the EventName is shown correctly. However, if I select a new Event via this ComboBox, the CurrentItem of the CollectionView doesn't seem to change. I feel like I'm missing something obvious! Note: the Player is selected via a ListBox, and that selection filters the matches displayed. That part seems to be working fine, so I didn't include the code; it is the reason for the "PlayerID" reference in the LINQ query.
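
    One direction that might be worth trying (an assumption on my part, not a verified fix): bind SelectedItem to the Match's Event association property instead of SelectedValue to the scalar EventID, so the selection writes back through the entity reference. A minimal sketch:

        <!-- Sketch only: assumes the LINQ to SQL Match entity exposes an "Event"
             association property; ItemsSource is still set in code-behind as above. -->
        <ComboBox x:Name="cboEvent"
                  DisplayMemberPath="EventName"
                  SelectedItem="{Binding Path=Event, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />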

  • MD5CryptoServiceProvider ComputeHash Issues between VS 2003 and VS 2008

    - by owensoroke
    I have a database application that generates an MD5 hash and compares the hash value to a value in our DB (SQL 2K). The original application was written in Visual Studio 2003 and a deployed version has been working for years. Recently, some new machines on the .NET Framework 3.5 have been having unrelated issues with our runtime. This has forced us to port our code path from Visual Studio 2003 to Visual Studio 2008. Since that time, the hash produced by the code is different from the values in the database. The original call to the function posted in code is:

        RemoveInvalidPasswordCharactersFromHashedPassword(Text_Scrub(GenerateMD5Hash(strPSW)))

    I am looking for expert guidance as to whether or not the MD5 methods have changed since VS 2003 (causing this point of failure), or where other possible problems may be originating from. I realize this may not be the best way to hash, but ultimately any change to the MD5 code would force us to change some 300 values in our DB table and would cost us a lot of time. In addition, I am trying to avoid having to redeploy all of the functioning versions of this application. I am more than happy to post other code, including the RemoveInvalidPasswordCharactersFromHashedPassword function or our Text_Scrub, if that is necessary to receive appropriate feedback. Thank you in advance for your input.

        Public Function GenerateMD5Hash(ByVal strInput As String) As String
            Dim md5Provider As MD5
            ' generate bytes for the input string
            Dim inputData() As Byte = ASCIIEncoding.ASCII.GetBytes(strInput)
            ' compute MD5 hash
            md5Provider = New MD5CryptoServiceProvider
            Dim hashResult() As Byte = md5Provider.ComputeHash(inputData)
            Return ASCIIEncoding.ASCII.GetString(hashResult)
        End Function
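
    One detail that may matter here (offered as a possibility, not a confirmed diagnosis): the function decodes the raw 16 hash bytes with ASCIIEncoding.ASCII.GetString, which is lossy for byte values above 127, and the fallback behaviour of that decoding has not been identical across framework versions. A hex digest avoids the problem entirely, though it would not match the values already stored, so the sketch below is for comparison only. It assumes Imports System.Text and System.Security.Cryptography at the top of the file.

        ' Sketch only: produce a printable hex digest instead of decoding raw hash bytes as ASCII.
        Public Function GenerateMD5HexHash(ByVal strInput As String) As String
            Using md5Provider As MD5 = New MD5CryptoServiceProvider()
                Dim inputData() As Byte = ASCIIEncoding.ASCII.GetBytes(strInput)
                Dim hashResult() As Byte = md5Provider.ComputeHash(inputData)
                Dim sb As New StringBuilder()
                For Each b As Byte In hashResult
                    sb.Append(b.ToString("x2"))   ' two lowercase hex digits per byte
                Next
                Return sb.ToString()
            End Using
        End Function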

  • Solr - DeltaImport doesn't run the parentDeltaQuery

    - by rails
    I have a 1:n relation between my main entity (PackageVersion) and its tags in my DB. I add a new tag to the DB (with the current timestamp) and run the delta-import command. The delta select retrieves the row, but I don't see any other SQL being executed. Here are my data-config.xml configurations:

        <entity name="PackageVersion" pk="PackageVersionId"
                query="select ... from [dbo].[Package] Package
                       inner join [dbo].[PackageVersion] PackageVersion on Package.Id = PackageVersion.PackageId"
                deltaQuery="select PackageVersion.Id PackageVersionId from [dbo].[Package] Package
                            inner join [dbo].[PackageVersion] PackageVersion on Package.Id = PackageVersion.PackageId
                            where Package.LastModificationTime > '${dataimporter.last_index_time}'
                               OR PackageVersion.Timestamp > '${dataimporter.last_index_time}'"
                deltaImportQuery="select ... from [dbo].[Package] Package
                                  inner join [dbo].[PackageVersion] PackageVersion on Package.Id = PackageVersion.PackageId
                                  Where PackageVersionId=='${dih.delta.id}'" >
            <entity name="PackageTag" pk="ResourceId"
                    processor="CachedSqlEntityProcessor"
                    cacheKey="ResourceId" cacheLookup="PackageVersion.PackageId"
                    query="SELECT ResourceId,[Text] PackageTag from [dbo].[Tag] Tag"
                    deltaQuery="SELECT ResourceId,[Text] PackageTag from [dbo].[Tag] Tag
                                Where Tag.TimeStamp > '${dataimporter.last_index_time}'"
                    parentDeltaQuery="select PackageVersion.PackageVersionId from [dbo].[Package]
                                      where Package.Id=${PackageTag.ResourceId}">
            </entity>
        </entity>
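
    One thing that stands out (as a guess, not a tested fix): the parentDeltaQuery selects PackageVersion.PackageVersionId but its FROM clause only contains [dbo].[Package], so the column does not exist in that query, and the parent pk column name does not line up with the aliased PackageVersionId used elsewhere. An untested sketch of what a joined parentDeltaQuery might look like, assuming Tag.ResourceId points at Package.Id as the cacheLookup above suggests:

        parentDeltaQuery="select PackageVersion.Id PackageVersionId
                          from [dbo].[Package] Package
                          inner join [dbo].[PackageVersion] PackageVersion
                              on Package.Id = PackageVersion.PackageId
                          where Package.Id = '${PackageTag.ResourceId}'"

    As a separate observation, the main entity's deltaImportQuery compares with == (Where PackageVersionId=='...'), which SQL Server will reject; that may be worth checking too.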

  • Android if/else fails, continues to execute JSON

    - by Keith Cesar Haizlett
    I am trying to create a registration app with JSON to connect and post to a MySQL database. I created the following if/else statements to check for vacant input boxes, a password match, and valid email characters before allowing the data to be entered into the database. The code continues to execute the JSON posting even after the passwords don't match, invalid email characters are entered, or vacant text boxes are submitted. Why is it not returning, and why does it continue to execute the JSON code?

        try {
            if (!inputEmail.getText().toString().matches("[a-zA-Z0-9._-]+@[a-z]+.[a-z]+")
                    && email.length() > 0) {
                Toast.makeText(getApplicationContext(), "Enter Valid Email Address", Toast.LENGTH_LONG).show();
                return;
            }
            else if (name.equals("") || email.equals("") || password.equals("") || check.equals("")) {
                Toast.makeText(getApplicationContext(), "Field Vaccant", Toast.LENGTH_LONG).show();
                return;
            }
            // check if both passwords match
            else if (!password.equals(checkpass)) {
                Toast.makeText(getApplicationContext(), "Password does not match", Toast.LENGTH_LONG).show();
                return;
            }

            if (json.getString(KEY_SUCCESS) != null) {
                registerErrorMsg.setText("");
                String res = json.getString(KEY_SUCCESS);
                if (Integer.parseInt(res) == 1) {
                    // user successfully registered
                    // Store user details in SQLite Database
                    DatabaseHandler db = new DatabaseHandler(getApplicationContext());
                    JSONObject json_user = json.getJSONObject("user");
                    // Clear all previous data in database
                    userFunction.logoutUser(getApplicationContext());
                    db.addUser(json_user.getString(KEY_NAME), json_user.getString(KEY_EMAIL),
                            json.getString(KEY_UID), json_user.getString(KEY_CREATED_AT));
                    // Launch Dashboard Screen
                    Intent dashboard = new Intent(getApplicationContext(), DashboardActivity.class);
                    // Close all views before launching Dashboard
                    dashboard.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                    startActivity(dashboard);
                    // Close Registration Screen
                    finish();
                } else {
                    // Error in registration
                    registerErrorMsg.setText("User already Registered");
                }
            }
        } catch (JSONException e) {
        }
        }
        });
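
    From the snippet alone it looks as though the JSON response object (json) already exists by the time the validation runs, which would mean the network call was made before the checks; the returns then stop the success handling but not the post itself. A hedged sketch of the usual ordering, validate first and only then post:

        // Sketch only: isValid(), registerUser() and handleRegistrationResponse() are
        // placeholders for whatever this app actually uses to validate and post.
        if (!isValid(name, email, password, checkpass)) {
            return;                                   // nothing is posted when validation fails
        }
        JSONObject json = userFunction.registerUser(name, email, password);  // network call happens only now
        handleRegistrationResponse(json);             // the KEY_SUCCESS handling shown above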

  • Is there a way to specify a per-host deploy_to path with Capistrano?

    - by Chad Johnson
    I have searched and searched, and already asked a question about this, but have not received a clear answer. I have the following deploy script (snippet):

        set :application, "testapplication"
        set :repository, "ssh://domain.com//srv/hg/#{application}"
        set :scm, :mercurial
        set :deploy_to, "/srv/www/#{application}"

        role :web, "domain1.com", "domain2.com"
        role :app, "domain1.com", "domain2.com"
        role :db,  "domain1.com", :primary => true, :norelease => true
        role :db,  "domain2.com", :norelease => true

    As you can see, I have set deploy_to to a specific path, and I have also specified multiple web servers. However, each web server should have a different deployment path. I want to be able to run "cap deploy" and deploy to all hosts in one shot. I am NOT trying to deploy to staging and then to production; this is all production. My question is: how exactly do I specify a path per server? I have read the "Roles" documentation for Capistrano, and it is unclear to me how to do this. Can someone please post an example deploy file? Am I thinking about this the wrong way? I haven't been able to find an answer anywhere online. Please, someone help.

  • Lightweight JavaScript gallery - Can I position the img absolutely in relation to the li elements?

    - by blackessej
    I have an image gallery run by a small piece of JavaScript. The JavaScript grabs each element in the list and places it as a background in the parent element. Then the CSS styles the thumbnails as small blocks with a defined height/width. A click event for each list item toggles its child image's visibility and adds an "active" class name to the item. Using CSS, I'm trying to position the full-size image absolutely so that it appears at the same position for each thumb, but instead it moves in relation to the thumbs. Here's the CSS:

        #jgal li {
            background-position: 50% 50%;
            background-repeat: no-repeat;
            border: solid #999 4px;
            cursor: pointer;
            display: block;
            float: left;
            height: 60px;
            width: 60px;
            margin-bottom: 14px;
            margin-right: 14px;
            opacity: 0.5;
        }

        #jgal li img {
            position: absolute;
            top: 0px;
            left: 210px;
            display: none;
        }

    And the site: http://www.erisdesigns.net
    Thanks in advance for any help!
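
    An absolutely positioned element is placed relative to its nearest positioned ancestor, so one common way to pin every full image to the same spot is to make the gallery container itself that ancestor. A sketch, assuming #jgal is the containing list:

        /* Sketch: make the gallery list the positioning context,
           so every img lands at the same coordinates inside it. */
        #jgal {
            position: relative;
        }

        #jgal li img {
            position: absolute;
            top: 0;
            left: 210px;   /* now measured from #jgal, not from each thumb */
        }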

  • CRM 2011 - Set/Retrieve work hours programmatically

    - by Philip Rich
    I am attempting to retrieve a resource's work hours to perform some logic I require. I understand that the CRM scheduling engine is a little clunky around such things, but I assumed that I would eventually be able to find out how the working hours are stored in the DB... So a resource has associated calendars, those calendars have associated calendar rules and inner calendars, and so on. It is possible to look at the start/end and frequency of the aforementioned calendar rules and query their codes to work out whether a resource is 'working' during a given period. However, I have not been able to find the actual working hours, the 9-to-5 shall we say, in any field in the DB. I even tried some SQL profiling while I was creating a new schedule for a resource via the UI, but the results don't show any work hours passing to SQL. For those with the patience, the intercepted SQL statement is below:

        EXEC Sp_executesql
            N'update [CalendarRuleBase] set
                [ModifiedBy]=@ModifiedBy0, [EffectiveIntervalEnd]=@EffectiveIntervalEnd0,
                [Description]=@Description0, [ModifiedOn]=@ModifiedOn0,
                [GroupDesignator]=@GroupDesignator0, [IsSelected]=@IsSelected0,
                [InnerCalendarId]=@InnerCalendarId0, [TimeZoneCode]=@TimeZoneCode0,
                [CalendarId]=@CalendarId0, [IsVaried]=@IsVaried0, [Rank]=@Rank0,
                [ModifiedOnBehalfBy]=NULL, [Duration]=@Duration0, [StartTime]=@StartTime0,
                [Pattern]=@Pattern0
              where ([CalendarRuleId] = @CalendarRuleId0)',
            N'@ModifiedBy0 uniqueidentifier, @EffectiveIntervalEnd0 datetime, @Description0 ntext,
              @ModifiedOn0 datetime, @GroupDesignator0 ntext, @IsSelected0 bit,
              @InnerCalendarId0 uniqueidentifier, @TimeZoneCode0 int, @CalendarId0 uniqueidentifier,
              @IsVaried0 bit, @Rank0 int, @Duration0 int, @StartTime0 datetime, @Pattern0 ntext,
              @CalendarRuleId0 uniqueidentifier',
            @ModifiedBy0='EB04662A-5B38-E111-9889-00155D79A113',
            @EffectiveIntervalEnd0='2012-01-13 00:00:00',
            @Description0=N'Weekly Single Rule',
            @ModifiedOn0='2012-03-12 16:02:08',
            @GroupDesignator0=N'FC5769FC-4DE9-445d-8F4E-6E9869E60857',
            @IsSelected0=1,
            @InnerCalendarId0='3C806E79-7A49-4E8D-B97E-5ED26700EB14',
            @TimeZoneCode0=85,
            @CalendarId0='E48B1ABF-329F-425F-85DA-3FFCBB77F885',
            @IsVaried0=0,
            @Rank0=2,
            @Duration0=1440,
            @StartTime0='2000-01-01 00:00:00',
            @Pattern0=N'FREQ=WEEKLY;INTERVAL=1;BYDAY=SU,MO,TU,WE,TH,FR,SA',
            @CalendarRuleId0='0A00DFCF-7D0A-4EE3-91B3-DADFCC33781D'

    The key part of the statement is the setting of the pattern:

        @Pattern0=N'FREQ=WEEKLY;INTERVAL=1;BYDAY=SU,MO,TU,WE,TH,FR,SA'

    However, as mentioned, there is no indication of the work hours being set. Am I thinking about this incorrectly, or is CRM doing something interesting around these work hours? Any thoughts greatly appreciated, thanks.

  • java: how to parse html-like xml

    - by Yang
    I have an HTML-like XML document (basically it is HTML). I need to get the attr elements in each line element. Each line element looks like this:

        <line tid="744476117">
            <attr>1414</attr>
            <attr>31</attr><attr class="thread_title">title1</attr><attr>author1</attr><attr>date1</attr>
        </line>

    My code is below. It does recognize that there are 50 line elements in the file, but it gives me a NullPointerException around the line

        NodeList fstNmElmntLst = fstElmnt.getElementsByTagName("attr");

    Any idea why this is happening? The same code has been used for other applications without problems.

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        InputSource is = new InputSource();
        is.setCharacterStream(new StringReader(cleanxml));
        Document doc = db.parse(is);
        doc.getDocumentElement().normalize();
        System.out.println("Root element " + doc.getDocumentElement().getNodeName());

        NodeList nodeLst = doc.getElementsByTagName("line");
        for (int s = 0; s < nodeLst.getLength(); s++) {
            System.out.println(nodeLst.getLength());
            Node fstNode = nodeLst.item(s);
            if (fstNode.getNodeType() == Node.ELEMENT_NODE) {
                Element fstElmnt = (Element) fstNode;
                NodeList fstNmElmntLst = fstElmnt.getElementsByTagName("attr");
                Element fstNmElmnt = (Element) fstNmElmntLst.item(0);
                NodeList fstNm = fstNmElmnt.getChildNodes();
                System.out.println("attr : " + ((Node) fstNm.item(0)).getNodeValue());
            }
        }
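
    One thing that would produce exactly this symptom (a guess, not a diagnosis): a line element with no attr children, or an attr with no text node inside it, makes fstNmElmntLst.item(0) or fstNm.item(0) return null. A defensive sketch of the loop body:

        // Sketch: skip lines with no <attr> children or no text content,
        // instead of dereferencing a possible null.
        NodeList attrs = fstElmnt.getElementsByTagName("attr");
        if (attrs.getLength() == 0) {
            continue;                           // or log the offending <line>
        }
        Node firstAttr = attrs.item(0);
        Node text = firstAttr.getFirstChild();
        if (text != null) {
            System.out.println("attr : " + text.getNodeValue());
        }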

  • Passing Extras and screen rotation

    - by Luis A. Florit
    This kind of question appears periodically. Sorry if this has been covered before, but I'm a newbie and couldn't find the appropriate answer. It deals with the correct implementation of communication between classes and activities. I made a gallery app. It has 3 main activities: the Main one, to search for filenames using a pattern; a Thumb one, that shows all the images that matched the pattern as thumbnails in a GridView; and a Photo activity, that opens a full-sized image when you click a thumb in Thumbs. I pass to the Photo activity, via an Intent, the filenames (an array) and the position (an int) of the clicked thumb in the GridView. This third Photo activity has only one view in it: a TouchImageView, which I adapted for previous/next switching and zooming according to where you short-click on the image (left, right or middle). Moreover, I added a long-click listener to Photo to show EXIF info. The thing is working, but I am not happy with the implementation... some things are not right. One of the problems I am experiencing is that if I click on the right of the image to see the next one in the Photo activity, it switches fine (position++), but when rotating the device the original image at position appears. What is happening is that Photo is destroyed when rotating the device, and for some reason it restarts again without making use of super.onCreate(savedInstanceState), loading the Extras again (the position only changed in Photo, not in the parent activities). I tried startActivityForResult instead of startActivity, but failed... Of course I can do something contrived to save the position data, but there must be something "conceptual" that I am not understanding about how activities work, and I want to do this right. Can someone please explain what I am doing wrong, which is the best method to implement what I want, and why? Thanks a lot!!!
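
    The usual pattern for surviving rotation (a sketch of the standard mechanism, not a review of this particular app; the "position" key and field names are illustrative) is to persist the transient position in onSaveInstanceState and prefer the saved value over the Intent extra when the activity is recreated:

        // Sketch: keep the currently shown position across a configuration change.
        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            outState.putInt("position", position);
        }

        // In onCreate(), prefer the saved value over the original extra:
        if (savedInstanceState != null) {
            position = savedInstanceState.getInt("position");
        } else {
            position = getIntent().getIntExtra("position", 0);
        }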

  • Creating an Android app database with a large amount of data

    - by Thomas
    Hi all,
    The database of my application needs to be filled with a lot of data, so during onCreate() it's not only a few CREATE TABLE statements, there are a lot of INSERTs as well. The solution I chose is to store all these instructions in a SQL file located in res/raw, which is loaded with Resources.openRawResource(id). It works well, but I am facing an encoding issue: I have some accented characters in the SQL file which appear garbled in my application. This is my code:

        public String getFileContent(Resources resources, int rawId) throws IOException {
            InputStream is = resources.openRawResource(rawId);
            int size = is.available();
            // Read the entire asset into a local byte buffer.
            byte[] buffer = new byte[size];
            is.read(buffer);
            is.close();
            // Convert the buffer into a string.
            return new String(buffer);
        }

        public void onCreate(SQLiteDatabase db) {
            try {
                // get file content
                String sqlCode = getFileContent(mCtx.getResources(), R.raw.db_create);
                // execute code
                for (String sqlStatements : sqlCode.split(";")) {
                    db.execSQL(sqlStatements);
                }
                Log.v("Creating database done.");
            } catch (IOException e) {
                // Should never happen!
                Log.e("Error reading sql file " + e.getMessage(), e);
                throw new RuntimeException(e);
            } catch (SQLException e) {
                Log.e("Error executing sql code " + e.getMessage(), e);
                throw new RuntimeException(e);
            }
        }

    The workaround I found is to load the SQL instructions from a huge static final String instead of a file, and then all accented characters appear fine. But isn't there a more elegant way to load SQL instructions than a big static final String attribute holding all of them?
    Thanks in advance, Thomas
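
    The symptom sounds like a charset mismatch: new String(buffer) decodes with the platform default charset, while the file in res/raw is most likely saved as UTF-8. A small sketch of the decoding step, assuming the file really is UTF-8:

        // Sketch: decode the raw resource explicitly instead of using the default charset.
        return new String(buffer, "UTF-8");

    The UnsupportedEncodingException this constructor declares is a subclass of IOException, so the existing throws clause of getFileContent already covers it.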

  • Event consumption in WPF

    - by webaloman
    I have a very simple app written in Silverlight for Windows Phone, where I am trying to use events. In my App.xaml.cs code-behind I have implemented a GeoCoordinateWatcher which registers a gCWatche_PositionChanged method. This works OK; the method is called after the position has changed. What I want to do is fire another event, let's say DBUpdatedEvent, after the DB has been updated in the gCWatche_PositionChanged method. For this I declared in App.xaml.cs:

        public delegate void DBUpdateEventHandler(object sender, EventArgs e);

    and in my App class I have:

        public event DBUpdateEventHandler DBUpdated;

    The event is fired at the end of the gCWatche_PositionChanged method like this:

        OnDBUpdateEvent(new EventArgs());

    and I have also declared:

        protected virtual void OnDBUpdateEvent(EventArgs e)
        {
            if (DBUpdated != null)
            {
                DBUpdated(this, e);
            }
        }

    Now I need to consume this event in another Windows Phone page, which is a separate PhoneApplicationPage class. So I declared this method on that page:

        public void DBHasBeenUpdated(object sender, EventArgs e)
        {
            Debug.WriteLine("DB UPDATE EVENT CATCHED");
        }

    And in the constructor of that page I declared:

        DBUpdateEventHandler dbEH = new DBUpdateEventHandler(DBHasBeenUpdated);

    But when I test the application, the event is fired (OnDBUpdateEvent is called), yet DBUpdated is null, so DBUpdated is never invoked (strange), and the other phone page is not catching the event at all. Any suggestions on how to catch that event? Thanks.
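
    Constructing a DBUpdateEventHandler by itself does not subscribe anything, which would explain why DBUpdated is still null when the event fires. A sketch of the missing subscription, assuming the event lives on the App class as described (the page class name is illustrative):

        // Sketch: subscribe the page's handler to the event exposed by the running App instance.
        public MyPage()
        {
            InitializeComponent();
            ((App)Application.Current).DBUpdated += DBHasBeenUpdated;
        }

        // And unsubscribe when the page no longer needs it, to avoid leaks:
        // ((App)Application.Current).DBUpdated -= DBHasBeenUpdated;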

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I am not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
            schema.Column('id', types.Integer, primary_key=True),
            schema.Column('time', types.DateTime),
            schema.Column('ip', types.String(length=15))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are:

        1. Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, some sort of bulk insert, etc.?
        2. When I used MySQL, inserting about ~1.2 million records took 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the DB engines seem about right, or does it mean I am doing something very wrong?
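
    For question 1, one standard lever (a sketch under the assumption that per-row round trips and per-statement commits dominate) is to batch the parsed rows and hand a whole list to a single execute call, which SQLAlchemy turns into an executemany, inside one transaction:

        # Sketch: accumulate parsed rows and insert them in batches inside one transaction.
        batch = []
        BATCH_SIZE = 1000

        conn = engine.connect()
        trans = conn.begin()                     # a single transaction for the whole load
        try:
            for line in file_to_parse:
                m = line_regex.match(line)
                if not m:
                    continue
                batch.append(pythoninfy_log(m.groupdict()))
                if len(batch) >= BATCH_SIZE:
                    conn.execute(log_table.insert(), batch)   # executemany under the hood
                    batch = []
            if batch:
                conn.execute(log_table.insert(), batch)
            trans.commit()
        except Exception:
            trans.rollback()
            raise
        finally:
            conn.close()

    On question 2, if SQLite was effectively committing around every insert, wrapping the load in one transaction usually narrows the gap between the two engines considerably.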

  • django multiprocess problem

    - by iKiR
    I have a Django application running under lighttpd via FastCGI. The FCGI startup script looks like:

        python manage.py runfcgi socket=<path>/main.socket method=prefork \
            pidfile=<path>/server.pid \
            minspare=5 maxspare=10 maxchildren=10 maxrequests=500

    I use SQLite, so I have 10 processes which all work with the same DB. Next, I have 2 views:

        def view1(request):
            ...
            obj = MyModel.objects.get_or_create(id=1)
            obj.param1 = <some value>
            obj.save()

        def view2(request):
            ...
            obj = MyModel.objects.get_or_create(id=1)
            obj.param2 = <some value>
            obj.save()

    If these views are executed in two different processes, I sometimes end up with a MyModel instance in the DB with id=1 where either param1 or param2 is updated, but not both; it depends on which process ran first. (Of course, in real life the id changes, but sometimes two processes execute these two views with the same id.) The question is: what should I do to get an instance with both param1 and param2 updated? I need some way of merging changes made in different processes. One option is to create an interprocess lock object, but in that case the views would execute sequentially and could no longer run simultaneously, so I am asking for help.
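
    One way to avoid the lost update without a lock (a sketch; it assumes each view only ever touches its own field) is to push the single-column change down to the database instead of saving the whole row:

        # Sketch: update only the column this view owns, so a concurrent
        # save of the other column is not overwritten.
        from myapp.models import MyModel   # hypothetical app/module path

        def view1(request):
            MyModel.objects.get_or_create(id=1)                    # ensure the row exists
            MyModel.objects.filter(id=1).update(param1=new_value)  # single-column UPDATE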

  • C# Interface Method calls from a controller

    - by ArjaaAine
    I was just working on some application architecture, and this may sound like a stupid question, but please explain to me how the following works.

    Interface:

        public interface IMatterDAL
        {
            IEnumerable<Matter> GetMattersByCode(string input);
            IEnumerable<Matter> GetMattersBySearch(string input);
        }

    Class:

        public class MatterDAL : IMatterDAL
        {
            private readonly Database _db;

            public MatterDAL(Database db)
            {
                _db = db;
                LoadAll(); // Private Method
            }

            public virtual IEnumerable<Matter> GetMattersBySearch(string input)
            {
                //CODE
                return result;
            }

            public virtual IEnumerable<Matter> GetMattersByCode(string input)
            {
                //CODE
                return results;
            }
        }

    Controller:

        public class MatterController : ApiController
        {
            private readonly IMatterDAL _publishedData;

            public MatterController(IMatterDAL publishedData)
            {
                _publishedData = publishedData;
            }

            [ValidateInput(false)]
            public JsonResult SearchByCode(string id)
            {
                var searchText = id; // better name for this
                var results = _publishedData.GetMattersBySearch(searchText).Select(
                    matter => new
                    {
                        MatterCode = matter.Code,
                        MatterName = matter.Name,
                        matter.ClientCode,
                        matter.ClientName
                    });
                return Json(results);
            }
        }

    This works: when I call my controller method from jQuery and step into it, the call to the _publishedData method goes into the MatterDAL class. I want to know how my controller knows to go to the MatterDAL implementation of the interface IMatterDAL. What if I had another class called MatterDAL2 based on the same interface; how would my controller know to call the right one? I am sorry if this is a stupid question, but it is baffling me.
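
    The controller itself never chooses: whatever dependency injection container the project is wired up with resolves IMatterDAL to a registered concrete type when it constructs the controller. A sketch of what that registration typically looks like (Unity-style syntax, purely illustrative; the actual container in the project may differ):

        // Sketch: the composition root decides which implementation the interface maps to.
        IUnityContainer container = new UnityContainer();
        container.RegisterType<IMatterDAL, MatterDAL>();

        // With a second implementation you would change the mapping, or register it by name
        // and resolve that name explicitly where needed:
        container.RegisterType<IMatterDAL, MatterDAL2>("version2");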

  • Avoid slowdowns while using off-site database

    - by Anders Holmström
    The basic layout of my problem is this:

        - Website (ASP.NET/C#) hosted at a dedicated hosting company (location 1).
        - Company database (SQL Server) with records of relevant data (location 2).
        - Locations 1 and 2 connected through VPN.
        - Customers visiting the website and wanting to pull data from the company database.
        - No possibility of changing the server locations or layout (i.e. moving the website to an in-office server isn't possible).

    What I want to do is figure out the best way to handle the data access in this case, minimizing the need for time-expensive database calls over the VPN. The first idea I have is this: when a user enters the section of the website needing the DB data, pull all the needed tables from the database into an in-memory dataset. All subsequent views/updates to the data are done on this dataset. When the user leaves (logout, session timeout, browser closed, etc.) the dataset gets sent back to the SQL server. I'm not sure this is a realistic solution, and it obviously has some problems. If two web visitors are performing updates on the same data, the one finishing last will overwrite the first one's changes. There's also no way of knowing you have the latest data (i.e. if a customer pulls some info on their projects and we update this info while they are viewing it, they won't see these changes, plus the overwriting issue above will arise). The other solution would be to somehow aggregate database calls and make sure they only happen when you need them, e.g. during data updates but not during data views. But then again, the longer the pause between these refreshing DB calls, the bigger the chance that the data view is out of date, as per the problem described above. Any input on the above, or some fresh ideas, would be most welcome.

  • I have a question about variable release in global class.

    - by Beomseok
        + (void)findAndCopyOfDatabaseIfNeeded {
            NSArray *path = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [path objectAtIndex:0];
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSString *databasePath = [documentsDirectory stringByAppendingPathComponent:@"DB"];

            BOOL success = [fileManager fileExistsAtPath:databasePath];
            if (!success) {
                NSString *resourcePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"DB"];
                [fileManager copyItemAtPath:resourcePath toPath:databasePath error:NULL];
            }

            NSString *tracePath = [documentsDirectory stringByAppendingPathComponent:@"Trace"];
            BOOL traceDir = [fileManager fileExistsAtPath:tracePath];
            if (!traceDir) {
                NSString *resourcePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"Trace"];
                [fileManager copyItemAtPath:resourcePath toPath:tracePath error:NULL];
            }

            NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
            [dateFormatter setDateFormat:@"yyyy"];
            NSDate *today = [[NSDate alloc] init];
            NSString *resultYear = [dateFormatter stringFromDate:today];

            NSString *traceYearPath = [tracePath stringByAppendingPathComponent:resultYear];
            BOOL yearDir = [fileManager fileExistsAtPath:tracePath];
            if (!yearDir) {
                [fileManager createDirectoryAtPath:traceYearPath attributes:nil];
            }

            //[resultYear release]; ?
            //[today release]; ?
            //[dateFormatter release]; ?
        }

    I'm using a global class method like the + (void)findAndCopyOfDatabaseIfNeeded above. I don't know whether the NSArray, NSString and NSFileManager variables are released. Should these variables be released or not? Please advise.
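
    Under manual reference counting the usual rule of thumb is: release what you obtained via alloc/new/copy/retain, and leave autoreleased or framework-owned objects alone. Applied to this method as a sketch (ARC not assumed, matching the commented-out release calls in the question):

        // Sketch: only the objects created with alloc/init in this method are yours to release.
        [dateFormatter release];   // created with [[NSDateFormatter alloc] init]
        [today release];           // created with [[NSDate alloc] init]
        // resultYear, path, documentsDirectory, databasePath, tracePath and fileManager were
        // returned by methods that do not transfer ownership, so they are not released here.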

  • jQuery UI Autocomplete and CodeIgniter

    - by Kere Puki
    I am trying to implement a simple autocomplete script using jQuery UI and CodeIgniter 2, but my model keeps telling me there is an undefined variable, so I don't know if my setup is right.

    My view:

        $(function() {
            $("#txtUserSuburb").autocomplete({
                source: function(request, response) {
                    $.ajax({
                        url: "autocomplete/suggestions",
                        data: { term: $("#txtUserSuburb").val() },
                        dataType: "json",
                        type: "POST",
                        success: function(data) {
                            response(data);
                        }
                    });
                },
                minLength: 2
            });
        });

    My controller:

        function suggestions()
        {
            $this->load->model('autocomplete_model');
            $term = $this->input->post('term', TRUE);
            $rows = $this->autocomplete_model->getAutocomplete($term);
            echo json_encode($rows);
        }

    My model:

        function getAutocomplete()
        {
            $this->db->like('postcode', $term, 'after');
            $query = $this->db->get('tbl_postcode');
            $keywords = array();
            foreach ($query->result() as $row) {
                array_push($keywords, $row->postcode);
            }
            return $keywords;
        }

    There aren't any errors, except that it doesn't seem to be passing the $term variable to the model.
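
    The undefined-variable notice points at the model signature: the controller passes $term, but getAutocomplete() declares no parameter, so $term inside the method is never defined. A sketch of the model with the parameter accepted:

        // Sketch: accept the search term as a parameter so the controller's
        // $this->autocomplete_model->getAutocomplete($term) call actually reaches it.
        function getAutocomplete($term)
        {
            $this->db->like('postcode', $term, 'after');
            $query = $this->db->get('tbl_postcode');

            $keywords = array();
            foreach ($query->result() as $row) {
                $keywords[] = $row->postcode;
            }
            return $keywords;
        }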

  • How to call Java code from Javascript and assign a value to a JSP page?

    - by Frank
    I have the following "form.jsp" program, it generates a drop down list, below the list is a textarea to show the display_name of a selected item, now when user selected a item, it shows the selected item id in the textarea, how to call the DB from my code and get the display_name in the javascript so the result display_name will be shown in the textarea ? <%@ taglib prefix="s" uri="/struts-tags"%> <script type="text/javascript"> function callme(Display_Name) { alert('callme : Display_Name = '+Display_Name); var v=document.getElementById('hiddenValue').value; alert('hiddenValue : v = '+v); document.getElementById('defaultDisplayName').value=Display_Name; } </script> <s:hidden id="pricelist.id" name="pricelist.id" value="%{pricelist.id}"/> <div class="dialog"> <table> <tbody> <s:if test="%{enableProductList}"> <tr class="prop"> <td valign="top" class="name required"><label for="description">Product:</label></td> <td valign="top"> <s:select id="productPrice.product" name="productPrice.product" headerKey="0" headerValue="-- Select Product --" list="products" listKey="id" listValue="name" value="productPrice.product.id" theme="simple" displayName1='value' onchange="callme(value)" /> <s:hidden id="hiddenValue" name="hiddenValue" value="123"/> </td> </tr> </s:if> <tr class="prop"> <td valign="top" class="name"><label for="description">Default Display Name:</label></td> <td valign="top"><s:textarea id="defaultDisplayName" name="defaultDisplayName" theme="simple" readonly="true"/></td> </tr> See attached image for details, in the DB, a product table has the product Id and display_name, I know the Id, how to use Java to get the display_name and plug it into the jsp ?

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

  • Could not execute a stored procedure (using DAAB) from a client (aspx page) through a WCF service

    - by user1144695
    I am trying to store data in a SQL database from an ASP.NET client website, through a stored procedure (using DAAB) in a WCF service hosted in an ASP.NET empty website. When I try to store data to the DB I get the following error:

        The server was unable to process the request due to an internal error. For more information about
        the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or
        from the <serviceDebug> configuration behavior) on the server in order to send the exception
        information back to the client, or turn on tracing as per the Microsoft .NET Framework SDK
        documentation and inspect the server trace logs.

    When I try to debug I get the following exception: "Activation error occured while trying to get instance of type Database, key """ on this line:

        Database db = EnterpriseLibraryContainer.Current.GetInstance<Database>("MyInstance");

    My app.config is:

        <?xml version="1.0"?>
        <configuration>
          <configSections>
            <section name="dataConfiguration"
                     type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                     requirePermission="true"/>
          </configSections>
          <dataConfiguration defaultDatabase="MyInstance"/>
          <connectionStrings>
            <add name="MyInstance"
                 connectionString="Data Source=BLRKDAS307581\KD;Integrated Security=True;User ID=SAPIENT\kdas3;Password=ilove0LINUX"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
          </startup>
        </configuration>

    Can anyone help me with this? Thanks in advance...
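
    The fault message itself suggests the quickest next step: turning on includeExceptionDetailInFaults on the service so the real exception reaches the client while debugging. A sketch of the behavior section (the behavior name is a placeholder):

        <!-- Sketch: debug-only; remember to turn this off again for production. -->
        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="debugBehavior">
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    It is also worth checking that the dataConfiguration and connectionStrings sections live in the web.config of the website hosting the WCF service, not only in an app.config of a class library, since that is a common cause of the "Activation error occured while trying to get instance of type Database" message.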

  • Store multiple values in a session variable

    - by user458790
    Hi,
    Before I ask my question, please consider this scenario: a user on my website has a profileID, and some pluginIDs are associated with that profileID. For example, User1 might have plugins 2, 3 and 5 associated with his profile. When the user logs in, I store the user's profileID in a session variable named cod. On a certain page, the user tries to edit the plugins associated with his profile, so on that page I have to retrieve those pluginIDs from the DB. I have applied this code, but it fetches only one pluginID from the DB, not all of them:

        SqlCommand cmd1 = new SqlCommand("select plugin_id from profiles_plugins where id=(select id from profiles_plugins where profile_id=" + Convert.ToInt32(Session["cod"]) + ")", con);
        SqlDataReader dr1 = cmd1.ExecuteReader();
        if (dr1.HasRows)
        {
            while (dr1.Read())
            {
                Session["edp1"] = Convert.ToInt32(dr1[0]);
            }
        }
        dr1.Close();
        cmd1.Dispose();

    I was trying to figure out how I can store multiple pluginIDs in this session variable.
    Thanks
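
    Two things jump out as possibilities here (a sketch, not a verified fix): the loop overwrites Session["edp1"] on every row, so only one value survives, and a session entry can hold a whole collection rather than a single int. Something along these lines, with the query simplified to select by profile_id directly (an assumption about what the inner subquery was meant to do) and parameterised as a bonus:

        // Sketch: collect every plugin id for the profile into a list and store the list once.
        var pluginIds = new List<int>();

        using (var cmd1 = new SqlCommand(
            "select plugin_id from profiles_plugins where profile_id = @profileId", con))
        {
            cmd1.Parameters.AddWithValue("@profileId", Convert.ToInt32(Session["cod"]));
            using (SqlDataReader dr1 = cmd1.ExecuteReader())
            {
                while (dr1.Read())
                {
                    pluginIds.Add(Convert.ToInt32(dr1[0]));
                }
            }
        }

        Session["edp1"] = pluginIds;                  // the whole list in one session entry
        // Later: var ids = (List<int>)Session["edp1"];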

  • mysql data read returning 0 rows from class

    - by Neo
    I am implementing a database manager class within my app, mainly because there are 3 databases to connect to, one of them being a local one. However, the return function isn't working: I know the query brings back rows, but when the reader is returned by the class it has 0. What am I missing?

        public MySqlDataReader localfetchrows(string query, List<MySqlParameter> dbparams = null)
        {
            using (var conn = connectLocal())
            {
                Console.WriteLine("Connecting local : " + conn.ServerVersion);
                MySqlCommand sql = conn.CreateCommand();
                sql.CommandText = query;

                if (dbparams != null)
                {
                    if (dbparams.Count > 0)
                    {
                        sql.Parameters.AddRange(dbparams.ToArray());
                    }
                }

                MySqlDataReader reader = sql.ExecuteReader();
                Console.WriteLine("Reading data : " + reader.HasRows + reader.FieldCount);
                return reader;

                /*
                using (MySqlCommand sql = conn.CreateCommand())
                {
                    sql.CommandText = query;
                    if (dbparams != null)
                    {
                        if (dbparams.Count > 0)
                        {
                            sql.Parameters.AddRange(dbparams.ToArray());
                        }
                    }
                    MySqlDataReader reader = sql.ExecuteReader();
                    Console.WriteLine("Reading data : " + reader.HasRows + reader.FieldCount);
                    sql.Parameters.Clear();
                    return reader;
                }
                */
            }
        }

    And the code to get the results:

        query = @"SELECT jobtypeid, title FROM jobtypes WHERE active = 'Y' ORDER BY title ASC";
        //parentfrm.jobtypes = db.localfetchrows(query);
        var rows = db.localfetchrows(query);
        Console.WriteLine("Reading data : " + rows.HasRows + rows.FieldCount);
        while (rows.Read()) {
        }

    These scripts return the following:

        Connecting local : 5.5.16
        Reading data : True2
        Reading data : False0
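
    One likely culprit (an inference from the snippet, not a certainty): the using block disposes the connection as soon as localfetchrows returns, and a MySqlDataReader cannot be read once its connection is closed, which would explain rows that exist inside the method but vanish at the call site. A sketch that materialises the rows before the connection goes away:

        // Sketch: copy the result set into a DataTable while the connection is still open,
        // and return that instead of a live reader. Requires System.Data.
        public DataTable LocalFetchRows(string query, List<MySqlParameter> dbparams = null)
        {
            using (var conn = connectLocal())
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = query;
                if (dbparams != null && dbparams.Count > 0)
                {
                    cmd.Parameters.AddRange(dbparams.ToArray());
                }

                var table = new DataTable();
                using (var reader = cmd.ExecuteReader())
                {
                    table.Load(reader);          // reads everything before the connection closes
                }
                return table;
            }
        }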

  • What database options do I have for the Blackberry?

    - by peeping-jane
    I notice most of the discussions about BlackBerry database options are old and generally not very informative. As of today, March 31st, 2010, what is the best, most universally supported, free database option available for BlackBerry developers? I heard SQLite is available for JDE v5, but last I checked that was still in beta, and I didn't want to commit to developing on a system that is not supported by most of the phones in service. The thing is, I don't see any dates on these claims; for all I know, the announcements I am reading are from 2008. So I am still on v4.7. I need to use a relational DB for the app I am developing, but there aren't many resources for DB handling available, or at least not resources that are useful to me. I find a lot of "tutorials" that assume you know everything there is to know about BlackBerry development, or Java, but no complete classes or anything. Many of these examples don't even work; Eclipse gives warnings and errors for code copied and pasted from other people's examples. I can answer any questions that may help in this case. Hopefully, this thread will help many BB developers in the future.

  • Is there any time estimation about sqlite3's open speed?

    - by sxingfeng
    I am using SQLite3 in C++, and I found that the opening time of SQLite seems unstable on the first open (I mean, after starting Windows and opening the DB for the first time). It takes a long time on a 50 MB db, about 10 seconds on Windows, and it varies between runs. Has anyone met the same problem? I am writing a desktop application for Windows, so the opening speed is really important for me. Thanks in advance!

        int nRet;
        #if defined(_UNICODE) || defined(UNICODE)
            nRet = sqlite3_open16(szFile, &mpDB); // not tested under Windows 98
        #else // For Ansi Version
            //*************- Added by Begemot - szFile must be in unicode - 23/03/06 11:04 - ****
            OSVERSIONINFOEX osvi;
            ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX));
            osvi.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX);
            GetVersionEx((OSVERSIONINFO *) &osvi);
            if (osvi.dwMajorVersion == 5)
            {
                WCHAR pMultiByteStr[MAX_PATH+1];
                MultiByteToWideChar(CP_ACP, 0, szFile, _tcslen(szFile)+1,
                                    pMultiByteStr, sizeof(pMultiByteStr)/sizeof(pMultiByteStr[0]));
                nRet = sqlite3_open16(pMultiByteStr, &mpDB);
            }
            else
                nRet = sqlite3_open(szFile, &mpDB);
        #endif
        //*************************
        if (nRet != SQLITE_OK)
        {
            LPCTSTR szError = (LPCTSTR) _sqlite3_errmsg(mpDB);
            throw CppSQLite3Exception(nRet, (LPTSTR)szError, DONT_DELETE_MSG);
        }
        setBusyTimeout(mnBusyTimeoutMs);

  • Fastest way to develop data entry screens for a .NET backend?

    - by jay23
    I am a .NET / C# back-end guy. I am working on an app that will have about 200 different data entry screens. For me, exposing DTOs as a collection for CRUD (IUpdatable and IQueryable) is the easy part; I can do it in my sleep. What I am trying to decide is what type of front-end technology will allow me to develop these data entry screens fast. They don't have to be fancy, but they are not just plain grids either; on average they have about 15 form fields and some client-side data validation (no DB lookup). The options I am looking at are:

        1. ExtJS on the front and REST/JSON on the back.
        2. ASP.NET RIA Services, but I do not know Silverlight (well, XAML).
        3. Plain ASP.NET / MVC.

    One idea I had was that the DTO will contain the metadata about the form (as attributes) and the form can be dynamically generated, but I do not want to reinvent the wheel if there is an easier way. I have looked at RAD software, but all of it looks at the DB and generates screens. I would rather have something that can look at my DTO and generate screens. A sketch of the attribute idea follows below.
    Jay
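
    The attribute-driven idea from the previous paragraph, as a minimal sketch; every name here is illustrative rather than an existing library:

        // Sketch: decorate DTO properties with display metadata, then reflect over them.
        [AttributeUsage(AttributeTargets.Property)]
        public class FormFieldAttribute : Attribute
        {
            public string Label { get; set; }
            public bool Required { get; set; }
        }

        public class CustomerDto
        {
            [FormField(Label = "Customer name", Required = true)]
            public string Name { get; set; }

            [FormField(Label = "Credit limit")]
            public decimal CreditLimit { get; set; }
        }

        public static class FormGenerator
        {
            // Walk the DTO's properties and emit one field description per decorated property;
            // a real generator would emit inputs and validators instead of strings.
            public static IEnumerable<string> DescribeFields(Type dtoType)
            {
                foreach (var prop in dtoType.GetProperties())
                {
                    var meta = (FormFieldAttribute)Attribute.GetCustomAttribute(prop, typeof(FormFieldAttribute));
                    if (meta != null)
                    {
                        yield return string.Format("{0} ({1}){2}",
                            meta.Label, prop.PropertyType.Name, meta.Required ? " *required" : "");
                    }
                }
            }
        }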
