Search Results



  • Regex is too greedy

    - by alastairs
    I'm writing a regular expression to match data from the IMDb soundtracks data file. My regexes are mostly working, although in places they slurp too much text into my named groups. Take the following regex, for example:

        ^ Performed by '?(?<performer>.*)('? \(qv\))?$

    The performer group includes the string ' (qv) as well as the performer's name. Unfortunately, because the records are not consistently formatted, some performers' names are surrounded by single quotation marks whilst others are not, so the quotes are optional as far as the regex is concerned. I've tried marking the last group as an atomic group using the ?> group specifier, but this appeared to have no effect on the results. I can improve the results by changing the performer group to match a smaller range of characters, but this reduces my chances of parsing the name out correctly. Furthermore, if I were simply to exclude the apostrophe character, I would then be unable to parse band names containing apostrophes, such as Elia's Lonely Friends Band, who performed "Run For Your Life", featured in Resident Evil: Apocalypse.
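
    A lazy quantifier on the performer group, with the quote and the (qv) marker matched outside the capture, is one way to stop the slurping. A minimal sketch in C# (the revised pattern is an assumption on my part, not a confirmed fix):

        using System;
        using System.Text.RegularExpressions;

        class PerformerRegexSketch
        {
            static void Main()
            {
                // .*? stops as soon as the optional tail can match at the end of
                // the line, so the closing quote and " (qv)" stay out of the
                // performer group; an apostrophe inside a band name still matches
                // because the tail is anchored to $.
                var pattern = @"^ Performed by '?(?<performer>.*?)'? ?(?:\(qv\))?$";

                string[] lines =
                {
                    " Performed by 'Elia's Lonely Friends Band' (qv)",
                    " Performed by Moby (qv)",
                };

                foreach (var line in lines)
                    Console.WriteLine(Regex.Match(line, pattern).Groups["performer"].Value);
                // prints: Elia's Lonely Friends Band
                //         Moby
            }
        }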


  • How does DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end date to the SQL proc (per day), and the user would still get the latest data. Please speak to this as well.

    Given example: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010, but the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter; hence the importance of the question.

    Example proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.
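
    For reference, here is a minimal sketch of the calling side, assuming ADO.NET and the GetFooData proc above; the class name and connection handling are illustrative, not from the asker's app:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class FooReport
        {
            // Assumes an already-open SqlConnection.
            static void Run(SqlConnection conn, DateTime startDate)
            {
                using (var cmd = new SqlCommand("GetFooData", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;

                    // Midnight tomorrow rather than DateTime.Now: the value is
                    // identical for every run on the same day, while the proc's
                    // "LogDate < @EndDate" still includes all of today's rows.
                    cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value =
                        DateTime.Today.AddDays(1);

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // consume rows
                        }
                    }
                }
            }
        }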


  • Which nonclustered index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        SELECT CustomerName FROM Customers

    This leads the execution plan to show me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    For the first attempt to improve performance, I created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, I executed the select statement and the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    Now for the second try, I deleted this nonclustered index in order to create a new one:

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        )
        INCLUDE ([CategoryName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, I executed the select statement and the execution plan shows me the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    Am I doing something wrong, or is this expected? Shall I use the first nonclustered index with two fields, or the second nonclustered index with one field (CategoryId), including the second field (CategoryName)?


  • Rails - any fancy ways to handle 404s?

    - by jyoseph
    I have a rails app I built for an old site I converted from another CMS (in a non-rails language, hehe). Most of the old pages are mapped to the new pages using routes.rb, but there are still a few 404s. I am a rails newb, so I'm asking if there are any advanced ways to handle 404s. For example, if I were programming in my old language I'd do this:

        1. Get the URL (script_name) that was being accessed and parse it.
        2. Do a lookup in the database for any keywords, ids, etc. found in the new URL.
        3. If found, redirect to the page (or, if multiple records are found, show them all
           on a results page and let the user choose). With rails I'd probably want to do
           :status => :moved_permanently, I'm guessing?
        4. If not found, show a 404.

    Are there any gems/plugins or tutorials you know of that would handle such a thing, if it's even possible? Or can you explain on a high level how that can be done? I don't need a full code sample, just a push in the right direction.

    PS. It's a simple rails 3 app that uses a single Content model.


  • Dynamically generate client-side HTML form control using JavaScript and server-side Python code in Google App Engine

    - by gisc
    I have the following client-side front-end HTML using the Jinja2 template engine:

        {% for record in result %}
        <textarea name="remark">{{ record.remark }}</textarea>
        <input type="submit" name="approve" value="Approve" />
        {% endfor %}

    Thus the HTML may show more than one set of textarea and submit button. The back-end Python code retrieves a variable number of records from a GQL query using the model and passes this to the Jinja2 template in result. When a submit button is clicked, it triggers the post method to update the record:

        def post(self):
            if self.request.get('approve'):
                updated_remark = self.request.get('remark')
                record.remark = db.Text(updated_remark)
                record.put()

    However, in some instances the record updated is NOT the one that corresponds to the submit button clicked (e.g. if a user clicks on record 1's submit, record 2's remark gets updated, but not record 1's). I gather that this is due to the duplicate attribute name remark. I can possibly use JavaScript/jQuery to generate different attribute names. The question is, how do I code the back-end Python to get the (variable number of) names generated by the JavaScript? Thanks.


  • Hibernate not saving foreign key, but with JUnit it's OK

    - by Leonardo
    Hi all, I have this strange problem. In a J2EE webapp with Spring, SmartGWT and Hibernate, I have a class A which has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager which is supposed to do insert, update and delete, and everything works as expected, especially during insert. In the end I have one record in A and records in B with a foreign key to A. But when I try to call the service from the web app, the entities in B are saved without a foreign key reference. I am sure that the service is the same. One thing I noticed, after enabling Hibernate logging, is that when the service is called from the application, one more update is made:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)
        update A   <--- ???
        update B   <--- ???

    Instead, when the JUnit test case is run, the updates are as follows:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)

    I suppose the last update is what is causing the error; maybe it is overwriting values. Considering that the app is using Spring, with the well-known mechanism of DAO + Manager, where can I investigate to solve this issue? Someone told me that the session is not closed, so Hibernate would do one more update before releasing the objects by itself. I am pretty sure that all the configuration (hbm, XML, and the rest) is fine... but I may be wrong. Thanks


  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading the article about the recently released Gizzard sharding framework by Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "Idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, idempotent write operations should also be operations whose sequence doesn't matter. Now, my question is: how do I make write operations idempotent?

    The only thing I can imagine is to have a version number attached to each write. For example, in a blog system, each blog must have a $blog_id and $content. At the application level, we always write a blog's content like this: write($blog_id, $content, $version), where $version is determined to be unique at the application level. So, if the application first tries to set one blog to "Hello world" and second wants it to be "Goodbye", we have these two write operations:

        write($blog_id, "Hello world", 1);
        write($blog_id, "Goodbye", 2);

    These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same. This is just my understanding. Please correct me if I'm wrong.
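
    One way to make the versioned write concrete is a guard that ignores anything older than what is already stored. A minimal C# sketch, with an in-memory store standing in for the DB (the types and names are illustrative, not from Gizzard):

        using System;
        using System.Collections.Concurrent;

        class VersionedBlogStore
        {
            private readonly ConcurrentDictionary<long, (long Version, string Content)> _rows =
                new ConcurrentDictionary<long, (long Version, string Content)>();

            // Replaying the same write, or applying the two writes in either order,
            // converges on the content with the highest version: the operation is
            // idempotent and order-independent.
            public void Write(long blogId, string content, long version)
            {
                _rows.AddOrUpdate(
                    blogId,
                    _ => (version, content),
                    (_, existing) => version > existing.Version
                        ? (version, content)
                        : existing);
            }
        }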


  • Need help optimizing an Oracle query

    - by deming
    I need help optimizing the following query. It is taking a long time to finish; it takes almost 213 seconds. Because of some constraints, I cannot add an index and have to live with the existing ones.

        INSERT INTO temp_table_1 (USER_ID, role_id, participant_code, status_id)
        WITH A AS (SELECT USER_ID user_id, ROLE_ID, STATUS_ID, participant_code
                   FROM USER_ROLE
                   WHERE participant_code IS NOT NULL),   --1
             B AS (SELECT ROLE_ID FROM CMP_ROLE WHERE GROUP_ID = 3),
             C AS (SELECT USER_ID FROM USER)              --2
        SELECT USER_ID, ROLE_ID, PARTICIPANT_CODE, MAX(STATUS_ID)
        FROM A
        INNER JOIN B USING (ROLE_ID)
        INNER JOIN C USING (USER_ID)
        GROUP BY USER_ID, role_id, participant_code;

        --1 = query when run alone takes 100+ seconds
        --2 = query when run alone takes 19 seconds

        DELETE temp_table_1
        WHERE ROWID NOT IN (
            SELECT a.ROWID
            FROM temp_table_1 a, USER_ROLE b
            WHERE a.status_id = b.status_id
              AND (b.ACTIVE IN (1)
                   OR (b.ACTIVE IN (0, 3)
                       AND SYSDATE BETWEEN b.effective_from_date AND b.effective_to_date)));

    It seems like the person who wrote the query is trying to get everything into a temp table first and then delete records from the temp table; whatever is left is the actual result. Can't it be done in such a way that there is no need for the delete, and we just get the results needed, since that would save time?


  • How do people handle foreign keys on clients when synchronizing to a master DB?

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values.

    However, say a client record, e.g. a todo, has an attribute list_id which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server DB with luids rather than the suids the server is using.

    My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)", where someone seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences as another, and a third option using negative ids for client ids and then updating all negative ids with the suids. Both of these other options seem like a lot more work.

    Thanks, Saimon
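
    The luid-to-suid fix-up can also be done on the client right after the sync response arrives, before any further pushes. A minimal C# sketch under that assumption (Todo and ListId are illustrative names, not from the asker's schema):

        using System;
        using System.Collections.Generic;

        class Todo
        {
            public Guid Id;      // luid until the server acknowledges the row
            public Guid ListId;  // foreign key to the list record, also a luid at first
        }

        static class SyncFixup
        {
            // Rewrite primary keys and foreign keys from the luid -> suid map the
            // server returned, so later pushes never carry a luid foreign key.
            public static void ApplyMapping(
                IEnumerable<Todo> todos,
                IReadOnlyDictionary<Guid, Guid> luidToSuid)
            {
                foreach (var todo in todos)
                {
                    Guid suid;
                    if (luidToSuid.TryGetValue(todo.Id, out suid))
                        todo.Id = suid;
                    if (luidToSuid.TryGetValue(todo.ListId, out suid))
                        todo.ListId = suid;
                }
            }
        }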


  • How to provide an inline model field with queryset choices without losing field value for inline r

    - by Judith Boonstra
    The code displayed below is providing the choices I need for the app field, and the choices I need for the attr field, when using the admin. I am having a problem with the attr field on the inline form for already-saved records: the attr selected for these saved records does show in small print above the field, but not within the field itself.

        # MODELS:
        class Vocab(models.Model):
            entity = models.CharField(max_length=40, unique=True)

        class App(models.Model):
            name = models.ForeignKey(Vocab, related_name='vocab_appname', unique=True)
            app = models.ForeignKey('self', verbose_name='parent', blank=True, null=True)
            attr = models.ManyToManyField(Vocab, related_name='vocab_appattr', through='AppAttr')

            def parqs(self):
                # a method that provides a queryset consisting of available apps from
                # Vocab, excluding self and any apps within the current app's
                # dependent line
                ...

            def attrqs(self):
                # a method that provides a queryset consisting of available attrs from
                # Vocab, excluding 1) those already selected by the current app,
                # 2) those already selected by any apps within the current app's parent
                # line, and 3) those selected by any apps within the current app's
                # dependent line
                ...

        class AppAttr(models.Model):
            app = models.ForeignKey(App)
            attr = models.ForeignKey(Vocab)

        # FORMS:
        from models import AppAttr

        def appattr_form_callback(instance, field, *args, **kwargs):
            if field.name == 'attr':
                if instance:
                    return field.formfield(queryset=instance.attrqs(), **kwargs)
            return field.formfield(**kwargs)

        # ADMIN: necessary imports
        class AppAttrInline(admin.TabularInline):
            model = AppAttr

            def get_formset(self, request, obj=None, **kwargs):
                kwargs['formfield_callback'] = curry(appattr_form_callback, obj)
                return super(AppAttrInline, self).get_formset(request, obj, **kwargs)

        class AppForm(forms.ModelForm):
            class Meta:
                model = App

            def __init__(self, *args, **kwargs):
                super(AppForm, self).__init__(*args, **kwargs)
                if self.instance.id is None:
                    working = App.objects.all()
                else:
                    thisrec = App.objects.get(id=self.instance.id)
                    working = thisrec.parqs()
                self.fields['par'].queryset = working

        class AppAdmin(admin.ModelAdmin):
            form = AppForm
            inlines = [AppAttrInline,]
            fieldsets = ..........

        # necessary register statements


  • Applying correct bindings to a WPF DataTemplate to maximize reusability

    - by johncatfish
    Hi. I have a WPF application. I want to apply the data template below to a ListBox filled with records from Table02. Then, for each ListBoxItem, I need to bind the ComboBox to the same database table (Table01), but for each ListBoxItem the selected item will vary; it will be a foreign key to Table01.

        <DataTemplate x:Key="Table01DataTemplate">
            <Grid>
                <ComboBox ItemsSource="{Binding Model.IQueryable_Table01,
                              RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Window}}}"
                          SelectedValue="{Binding Table01_ForeignKey}"
                          DisplayMemberPath="name"
                          SelectedValuePath="id" />
                <!-- Other stuff -->
            </Grid>
        </DataTemplate>

        <ListBox x:Name="lbTest" ItemTemplate="{StaticResource Table01DataTemplate}" />

        <!-- In the .cs file: lbTest.DataContext = this; -->

    Notes: Model.IQueryable_Table01 is a property which encapsulates a LINQ-to-SQL call returning an IQueryable. lbTest will be filled by setting ItemsSource with a LINQ-to-SQL call.

    Is this a good way to do the bindings in a data template for maximum reusability? I also thought of replacing AncestorType={x:Type Window} with AncestorType={x:Type Application}, and lbTest.DataContext = this; with lbTest.DataContext = App.Current;, but it didn't work (exception on loading) and I don't know if there are any caveats or downsides to this approach. Is this good? Any suggestions or improvements? Thanks.


  • How to disable a button based on grid row count using jQuery

    - by kumar
    Hello friends, I have a button on the view. I need to disable it based on the row count in the jQuery grid; that is, if fewer than 100 rows come back, I need to disable that button. Any idea about this? Here is my code:

        <script type="text/javascript">
            $(document).ready(function() {
                var RegisterGridEvents = function(excGrid) {
                    // Register column chooser
                    $(excGrid).jqGrid('navButtonAdd', excGrid + '_pager', {
                        caption: "Columns",
                        title: "Reorder Columns",
                        onClickButton: function() {
                            $(excGrid).jqGrid('columnChooser');
                        }
                    });
                    $(".ui-pg-selbox").hide();
                    $('.ui-jqgrid-htable th:first').prepend('Select All').attr('style', 'font-size:7px');

                    // Register grid resize
                    $(excGrid).jqGrid('gridResize', {
                        minWidth: 350, maxWidth: 1500, minHeight: 400, maxHeight: 12000
                    });
                };

                $('#specialist-tab').tabs("option", "disabled", [2, 3, 4]);
                $('.button').button();
                RegisterButtonEvents();
                RegisterGridEvents("#ExceptionsGrid");
            });

            var a = $("#ExceptionsGrid").jqGrid('getGridParam', 'records').toString();
            alert(a);
        </script>

    Is this the right place to be doing this? Thanks


  • ExtJS - Loading the grid only when called

    - by Oxi
    I have a form and a grid. The user must enter data in the form fields and then see the related records in the grid. I want to implement a search form, e.g. the user will type the name and gender of a student, then get a grid of all students having the same name and gender. So, I use Ajax to send the form field values to PHP, which then builds a json_encode response that will be used in the grid store. I am really not sure if my idea is good, but I haven't found another way to do it.

    The problem is that when I set autoLoad to true in the store, the grid is automatically filled with all data, not just what I asked for. So, I understand that I have to set autoLoad to false, but then the result is not shown in the grid even though it is returned successfully in Firebug! I don't know what to do.

    My view:

        {
            xtype: 'panel',
            layout: "fit",
            id: 'searchResult',
            flex: 7,
            title: '<div style="text-align:center;"/>SearchResultGrid</div>',
            items: [
                {
                    xtype: 'gridpanel',
                    store: 'advSearchStore',
                    id: 'AdvSearch-grid',
                    columns: [
                        { xtype: 'gridcolumn', dataIndex: 'name', align: 'right', text: 'name' },
                        { xtype: 'gridcolumn', dataIndex: 'gender', align: 'right', text: 'gender' }
                    ],
                    viewConfig: {
                        id: 'Arr',
                        emptyText: 'noResult'
                    },
                    requires: ['MyApp.PrintSave_toolbar'],
                    dockedItems: [
                        {
                            xtype: 'PrintSave_tb',
                            dock: 'bottom'
                        }
                    ]
                }
            ]
        },

    My store and model:

        Ext.define('AdvSearchPost', {
            extend: 'Ext.data.Model',
            proxy: {
                type: 'ajax',
                url: 'AdvSearch.php',
                reader: {
                    type: 'json',
                    root: 'Arr',
                    totalProperty: 'totalCount'
                }
            },
            fields: [
                { name: 'name' },
                { name: 'type_and_cargo' }
            ]
        });

        Ext.create('Ext.data.Store', {
            pageSize: 10,
            autoLoad: false,
            model: 'AdvSearchPost',
            storeId: 'AdvSearchPost'
        });


  • Hang during data binding of a large amount of data to a WPF DataGrid

    - by nihi_l_ist
    I'm using the WPF Toolkit DataGrid control and do the binding in this way:

        <WpfToolkit:DataGrid x:Name="dgGeneral"
                             SelectionMode="Single"
                             SelectionUnit="FullRow"
                             AutoGenerateColumns="False"
                             CanUserAddRows="False"
                             CanUserDeleteRows="False"
                             Grid.Row="1"
                             ItemsSource="{Binding Path=Conversations}" >

        public List<CONVERSATION> Conversations
        {
            get { return conversations; }
            set
            {
                if (conversations != value)
                {
                    conversations = value;
                    NotifyPropertyChanged("Conversations");
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        public void NotifyPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public void GenerateData()
        {
            BackgroundWorker bw = new BackgroundWorker();
            bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true;
            List<CONVERSATION> list = new List<CONVERSATION>();
            bw.DoWork += delegate { list = RefreshGeneralData(); };
            bw.RunWorkerCompleted += delegate
            {
                try
                {
                    Conversations = list;
                }
                catch (Exception ex)
                {
                    CustomException.ExceptionLogCustomMessage(ex);
                }
            };
            bw.RunWorkerAsync();
        }

    Then, in the main window, I call GenerateData() after setting the DataContext of the window to an instance of the class containing GenerateData(). RefreshGeneralData() returns the data I want, and it returns it fast. Overall there are near 2000 records and 6 columns (I'm not posting the code I used during the grid's initialization, because I don't think it can be the reason), and the grid hangs for almost 10 seconds!


  • Forcing the user to a new page in PHP (PHP newbie)

    - by JohnC
    Hello, I'm a newbie web programmer. My background is writing Windows applications with SQL. I'm putting together my first data-entry screens in PHP. I have a search form that links to a form that displays records in a grid. On each row of the grid I have a delete URL to allow the user to remove a record. This links to a form, delete.php, which calls the SQL to remove the record. Ideally I would like to automatically take the user back to the search form rather than forcing the user to click on a link to do so. I have used ob_start with header() to do this elsewhere, but I cannot get it to work on this page. Is there another way to do it? (Using PHP 5 as part of LAMP.)

    File delete.php:

        <?php
        $id = $_GET['recordID'];
        //ob_start();
        require_once('connections/local.php');
        mysql_select_db($database_local, $local);
        mysql_query("DELETE FROM user_access WHERE id = {$id}") or die(mysql_error());
        echo("Record ".$id." deleted");
        echo("<br>");
        //header("location:http://localhost/search7.htm");
        //ob_flush();
        echo("<a href=\"http://localhost/search7.htm\">Search for Members</a>");
        ?>


  • Entity Framework insert bug

    - by tmfkmoney
    I found a previous question which seemed related, but there's no resolution and it's 5 months old, so I've opened my own version: http://stackoverflow.com/questions/1545583/entity-framework-inserting-new-entity-via-objectcontext-does-not-use-existing-e

    When I insert records into my database with the following, it works fine for a while and then eventually starts inserting null values in the referenced field. This typically happens after I do an update on my model from the database, although not always. I'm using a MySQL database for this. I have debugged the code and the values are being set properly before the save event; they're just not getting inserted properly. I can always fix this issue by re-creating the model without touching any of my code. I have to recreate the entire model, though; I can't just dump the relevant tables and re-add them. This makes me think it doesn't have anything to do with my code but something with the Entity Framework. Does anyone else have this problem and/or solved it?

        using (var db = new MyModel())
        {
            var stocks = from record in query
                         let ticker = record.Ticker
                         select new
                         {
                             company = db.Companies.FirstOrDefault(c => c.ticker == ticker),
                             price = Convert.ToDecimal(record.Price),
                             date_stamp = Convert.ToDateTime(record.DateTime)
                         };

            foreach (var stock in stocks)
            {
                if (stock.company != null)
                {
                    var price = new StockPrice
                    {
                        Company = stock.company,
                        price = stock.price,
                        date_stamp = stock.date_stamp
                    };
                    db.AddToStockPrices(price);
                }
            }
            db.SaveChanges();
        }


  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can.

    Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server.

    The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
            WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work.

    My question (TL;DR): how can I shrink my transaction log file without disabling the mirror?


  • Fastest way to become a MySQL expert?

    - by Kerry
    I have been using MySQL for years, mainly on smaller projects, until the last year or so. I'm not sure if it's the nature of the language or my lack of real tutorials that leaves me unsure whether what I'm writing is proper for optimization and scaling purposes. While self-taught in PHP, I'm very sure of myself and the code I write, and can easily compare it to others' code. With MySQL, I'm not sure whether (and in what cases) an INNER JOIN or a LEFT JOIN should be used, nor am I aware of the large amount of functionality MySQL has. While I've written code for databases that handled tens of millions of records, I don't know if it's optimal. I often find that a small tweak will make a query take less than 1/10 of the original time... but how do I know that my current query isn't also slow? I would like to become completely confident in this field, with the ability to optimize databases and make them scalable. Use is not a problem; I use MySQL on a daily basis in a number of different ways. So, the question is: what's the path? Reading a book? Websites/tutorials? Recommendations?


  • JTable data only shown after scrolling

    - by Christian 'fuzi' Orgler
    I wrote a method that creates my DefaultTableModel, where I add my records. When I set the model on my JTable, the data rows are blank; after scrolling, the data gets displayed correctly. How can I avoid this and display the data from the first moment?

    EDIT: I imported javax.swing.table.DefaultTableModel; is this correct?

        private DefaultTableModel _dtm;

        private void loadTable(Vector<Member> members) {
            loadTableModel();
            try {
                lbl_state.setText("Please wait");
                for (Member actMember : members) {
                    String gender = "";
                    if (actMember.getGender() == MemberView.MEMBER_MALE) {
                        gender = "männlich";
                    } else {
                        gender = "weiblich";
                    }
                    _dtm.addRow(new Object[]{
                        actMember.getNname(),
                        actMember.getVname(),
                        actMember.getCity(),
                        actMember.getStreet(),
                        actMember.getPlz(),
                        actMember.getMail(),
                        actMember.getPhonenumber(),
                        actMember.getBirthdayString(),
                        actMember.getStartDateString(),
                        gender,
                        actMember.getBankname(),
                        actMember.getAccountnumber(),
                        actMember.getBanknumber(),
                        actMember.getGroup().toString(),
                        (actMember.hasAccess() ? "JA" : "NEIN"),
                        actMember.getWriteDateString(),
                        (actMember.hasDrinkAbo() ? "JA" : "NEIN")
                    });
                }
            } catch (Exception ex) {
                System.err.println(ex.getMessage());
            }
            tbl_results.setModel(_dtm);
        }

        private void loadTableModel() {
            _dtm = new DefaultTableModel(new Object[]{"Nachname", "Vorname", "Ort",
                "Straße", "PLZ", "E-Mail", "Telefon", "Geburtsdatum", "Beitrittsdatum",
                "Geschlecht", "Bankname", "Kontonummer", "Bankleitzahl", "Gruppe",
                "hat Zugriff", "Einschreibdatum", "Getränkeabo"}, 0);
            tbl_results.setAutoResizeMode(JTable.AUTO_RESIZE_ALL_COLUMNS);
        }


  • Append JSON data to an HTML class name

    - by user2898514
    I have a problem with my JSON code. I want each JSON value to be appended to the HTML element whose class name matches that value's key in the JSON data. Here's my live demo. If you look at the result in the live demo, it's only appending the last record. Is it possible to make it show all records in order?

    JSON:

        var json = '[{"castle":"big","commercial":"large","common":"sergio","cultural":"2009"},' +
                   '{"castle":"big2","commercial":"large2","common":"sergio2","cultural":"20092"}]';

    HTML:

        <div class="castle"></div>
        <div class="commercial"></div>
        <div class="common"></div>
        <div class="cultural"></div>

    JavaScript:

        var data = $.parseJSON(json);
        $.each(data, function(l, v) {
            $.each(v, function(k, o) {
                $('.' + k).attr('id', k + o);
                console.log($('#' + k + o).attr('id'));
                $('#' + k + o).text(o);
            });
        });

    For more illustration, I want the result in the live demo to look like this:

        big large sergio 2009,
        big2 large2 sergio2 20092

  • IE8 removes the background-color of the header row of asp:GridView

    - by Hitesh Riziya
    I am using the ASP.NET 4.0 GridView control to display data from a database, and I have applied the inbuilt theme to the GridView:

        <asp:GridView ID="gv" runat="server" CellPadding="4" EmptyDataText="No records found."
            ForeColor="#333333" OnRowCommand="gv_RowCommand" Width="99%"
            OnPageIndexChanging="gv_PageIndexChanged" PageSize="50" AllowPaging="True"
            GridLines="None" AutoGenerateColumns="true">
            <AlternatingRowStyle BackColor="White" />
            <EditRowStyle BackColor="#7C6F57" />
            <FooterStyle BackColor="#1C5E55" Font-Bold="True" ForeColor="White" />
            <HeaderStyle CssClass="GridHeader" BackColor="#1C5E55" Font-Bold="True"
                ForeColor="White" HorizontalAlign="Left" />
            <PagerStyle BackColor="#666666" ForeColor="White" HorizontalAlign="Center" />
            <RowStyle BackColor="#E3EAEB" />
            <SelectedRowStyle BackColor="#C5BBAF" Font-Bold="True" ForeColor="#333333" />
            <SortedAscendingCellStyle BackColor="#F8FAFA" />
            <SortedAscendingHeaderStyle BackColor="#246B61" />
            <SortedDescendingCellStyle BackColor="#D4DFE1" />
            <SortedDescendingHeaderStyle BackColor="#15524A" />
        </asp:GridView>

    I tried setting the CSS forcefully in my master page:

        .GridHeader { background-color: #1C5E55 !important; }

    But I am still missing the background color. I can see the background color applied to the grid (for less than a second) while the page is loading the JS/CSS content.

    NOTE: I have already tried clearing the IE cache, Ctrl+F5, Shift+reload, etc. Here is a sample page showing my issue:

        http://vd2.weenggs.com/Items.aspx
        email: [email protected]
        pass: test

    Thanks


  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64).

    The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.).

    The problem has the following qualities and resiliences:

        - The problem persists no matter what the target table is.
        - RAM usage is lowish (45%); there's plenty of spare RAM available for SSIS
          buffers or SQL Server to use.
        - Perfmon shows buffers are not spooling, disk response times are normal, and
          disk availability is high.
        - CPU usage is low (hovers around 25%, shared between sqlserver.exe and
          DtsDebugHost.exe).
        - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s).
        - The OLE DB destination and the SQL Server destination both exhibit this problem.

    To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers/suggestions.


  • Social Media Java Design Problem

    - by jboyd
    I need to put something together quickly that will take blog posts and place them on social media sites. The requirements are as follows:

        - Blog entries are independent records that already exist; they have a published
          date and a modified date. The blog entry application cannot be changed, at
          least not substantially.
        - A new blog entry, or an update to one, needs to be sent to the social media sites.
        - I currently do not need to update or delete social media communications if the
          blog entry is edited or deleted, though I may need to later.

    My design problems here are as follows:

        - How do I know the status of each update?
        - How can I figure out which blog entry updates and postings have already been
          sent out?
        - How can I quickly poll the blog entry table for postings that haven't yet been
          sent out, while avoiding looking at each entry record from the DB as an object
          and asking if it's been sent already? That would be too slow.
        - I cannot hook into any blog entry update code; my only option would be to
          create a trigger so that an update queues something to be processed.

    I'm looking for general guiding principles here. The biggest problem I'm having is coming up with any reasonable way to figure out whether a blog entry should be sent to our social media sites in the first place.
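
    One common shape for the "what hasn't been sent yet" poll is a watermark on the modified date plus a status row per entry and destination. A minimal C# sketch, where every name is illustrative rather than taken from the asker's system:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum PushStatus { Pending, Sent, Failed }

        class BlogEntry
        {
            public long Id;
            public DateTime Published;
            public DateTime Modified;
        }

        class SocialPushTracker
        {
            // One status row per (entry, destination) answers "what is the status
            // of each update" without touching the blog entry records themselves.
            private readonly Dictionary<(long EntryId, string Destination), PushStatus> _status =
                new Dictionary<(long EntryId, string Destination), PushStatus>();

            // Watermark poll: only entries touched since the last sweep are fetched,
            // so the job never walks the whole table object by object.
            public List<BlogEntry> FindCandidates(IQueryable<BlogEntry> entries, DateTime lastSweep)
            {
                return entries
                    .Where(e => e.Published > lastSweep || e.Modified > lastSweep)
                    .ToList();
            }

            public void MarkSent(long entryId, string destination)
            {
                _status[(entryId, destination)] = PushStatus.Sent;
            }
        }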


  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script, run as a daily cronjob, that deletes old records from one main table and a few further tables referencing its "id" column:

        create or replace function quincytrack_clean()
        returns integer as $BODY$
        begin
            create temp table old_ids (id varchar(20)) on commit drop;

            insert into old_ids
            select id from quincytrack
            where age(QDATETIME) > interval '30 days';

            delete from hide_id where id in (select id from old_ids);
            delete from related_mks where id in (select id from old_ids);
            delete from related_cl where id in (select id from old_ids);
            delete from related_comment where id in (select id from old_ids);
            delete from quincytrack where id in (select id from old_ids);

            return select count(*) from old_ids;
        end;
        $BODY$ language plpgsql;

    And here is how I call it from the PHP script:

        $sth = $pg->prepare('select quincytrack_clean()');
        $sth->execute();
        if ($row = $sth->fetch(PDO::FETCH_ASSOC))
            printf("removed %u old rows\n", $row['count']);

    Why do I get the following error?

        SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select"
        at character 9
        QUERY: SELECT select count(*) from old_ids
        CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23

    Thank you! Alex


  • Kohana 3.2 - Database Session losing data on new Page Request

    - by reado
    I've set up my dev Kohana server to use an encrypted database as the default session type. I'm also using this in combination with Auth to implement user authentication. Right now my users are able to authenticate correctly and the authentication keys are being stored in the session. I'm also storing additional data, like the user's first name and business name, during the login procedure.

    When my login function is ready to redirect the user to the user dashboard, I'm able to see all the data correctly when I do Session::instance()->as_array():

        Array ( [auth_user] => NRyk6lA8 [businessname] => Dudetown [firstname] => Matt )

    As soon as I redirect the user to another page, Session::instance()->as_array() is empty. By dumping out the Session::instance() object, I can see that the session ids are still the same. When I look at my database table, though, I don't see any session records being saved, and my session table is empty.

    My bootstrap.php contains:

        Session::$default = 'database';
        Cookie::$salt = 'asdfasdf';
        Cookie::$expiration = 1209600;
        Cookie::$domain = FALSE;

    and my session.php config file looks like:

        return array(
            'database' => array(
                'name' => 'auth_user',
                'encrypted' => TRUE,
                'lifetime' => 24 * 3600,
                'group' => 'default',
                'table' => 'sessions',
                'columns' => array(
                    'session_id'  => 'session_id',
                    'last_active' => 'last_active',
                    'contents'    => 'contents'
                ),
                'gc' => 500,
            ),
        );

    I've looked high and low for an answer... if anyone has any suggestions, I'm all ears! Thanks!

