Search Results

Search found 6630 results on 266 pages for 'cname record'.


  • Surely a foolish error, but I can't see it

    - by dnagirl
    I have a form (greatly simplified): <form action='http://example.com/' method='post' name='event_form'> <input type='text' name='foo' value='3'/> <input type='submit' name='event_submit' value='Edit'/> </form> And I have a class "EventForm" to process the form. The main method, process(), looks like this: public function process($submitname=false){ $success=false; if ($submitname && isset($_POST[$submitname])){ //PROBLEM: isset($_POST[$submitname]) is always FALSE for some reason if($success=$this->ruler->validate()){//check that dependencies are honoured and data types are correct if($success=$this->map_fields()){//divide input data into Event, Schedule_Element and Temporal_Expression(s) $success=$this->eventholder->save(); } } } else {//get the record from the db based on event_id or schedule_element_id foreach($this->gets as $var){//list of acceptable $_GET keys if(isset($_GET[$var])){ if($success= $this->__set($var,$_GET[$var])) break; } } } $this->action=(empty($this->eventholder->event_id))? ADD : EDIT; return $success; } When the form is submitted, this happens: $form->process('event_submit'); For some reason, though, isset($_POST['event_submit']) always evaluates to FALSE. Can anyone see why?
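
    Not part of the original question, but a minimal debugging sketch for this situation: dump what actually arrives and check the request method, since a redirect on the action URL (trailing slash, http-to-https, www rewrite) is a common way the POST body gets silently dropped. Everything below is illustrative only.

        // at the very top of the script that receives the form (sketch only)
        var_dump($_SERVER['REQUEST_METHOD']); // expect "POST"; "GET" suggests a redirect swallowed the POST data
        var_dump($_POST);                     // does 'event_submit' arrive at all?

        // a slightly more defensive guard than isset($_POST[$submitname]):
        if ($submitname !== false && array_key_exists($submitname, $_POST)) {
            // ... proceed as before
        }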

    Read the article

  • Selecting random top 3 listings per shop for a range of active advertising shops

    - by GraGra33
    I’m trying to display a list of shops that are actively advertising, each with 3 random items from their shop if they have 3 or more listings. I have 3 tables: one for the shops – “Shops”, one for the listings – “Listings” and one that tracks active advertisers – “AdShops”. Using the statement below, the listings returned are random; however, I’m not getting exactly 3 listings (rows) returned per shop. SELECT AdShops.ID, Shops.url, Shops.image_url, Shops.user_name AS shop_name, Shops.title, L.listing_id AS listing_id, L.title AS listing_title, L.price as price, L.image_url AS listing_image_url, L.url AS listing_url FROM AdShops INNER JOIN Shops ON AdShops.user_id = Shops.user_id INNER JOIN Listings AS L ON Shops.user_id = L.user_id WHERE (Shops.is_vacation = 0 AND Shops.listing_count > 2 AND L.listing_id IN (SELECT TOP 3 L2.listing_id FROM Listings AS L2 WHERE L2.listing_id IN (SELECT TOP 100 PERCENT L3.listing_id FROM Listings AS L3 WHERE (L3.user_id = L.user_id) ) ORDER BY NEWID() ) ) ORDER BY Shops.shop_name I’m stumped. Does anyone have any ideas on how to fix it? The ideal solution would be one record per shop where the 3 listings (and associated data) are in columns rather than rows – is this possible?
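
    A hedged alternative shape for the query, using the common "N random child rows per parent" pattern on SQL Server (CROSS APPLY plus TOP and NEWID()); table and column names are taken from the question, everything else is an assumption. Pivoting the three listings into columns would then be a separate step (ROW_NUMBER() plus MAX(CASE ...)).

        SELECT S.user_id, S.user_name AS shop_name, S.url, S.image_url,
               L.listing_id, L.title AS listing_title, L.price, L.url AS listing_url
        FROM AdShops AS A
        INNER JOIN Shops AS S ON A.user_id = S.user_id
        CROSS APPLY (
            SELECT TOP 3 L2.listing_id, L2.title, L2.price, L2.url
            FROM Listings AS L2
            WHERE L2.user_id = S.user_id
            ORDER BY NEWID()              -- a fresh random 3 per shop
        ) AS L
        WHERE S.is_vacation = 0 AND S.listing_count > 2
        ORDER BY S.user_name;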

    Read the article

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do: Read data from the first text file and for every line of data add a row to a DataTable, using CarID as my unique field. Read data from the second text file and look for an existing CarID in the DataTable; if it exists, update that row, and if it doesn't exist, add a new row. Once I'm done, push the contents of the DataTable to the database. What I have so far: Dim sSQL As String = "SELECT * FROM tblCars" Dim da As New OleDb.OleDbDataAdapter(sSQL, conn) Dim ds As New DataSet da.Fill(ds, "CarData") Dim cb As New OleDb.OleDbCommandBuilder(da) 'loop read a line of text and parse it out. gets dd, dc, and carId 'create a new empty row Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow() 'update the new row with fresh data dsNewRow.Item("DriveDate") = dd dsNewRow.Item("DCode") = dc dsNewRow.Item("CarNum") = carID 'about 15 more fields 'add the filled row to the DataSet table ds.Tables("CarData").Rows.Add(dsNewRow) 'end loop 'update the database with the new rows da.Update(ds, "CarData") Questions: In constructing my table I use "SELECT * FROM tblCars", but what if that table has millions of records already? Is that not a waste of resources? Should I be trying something different if I want to update with new records? Once I'm done with the first text file I then go to the next text file. What's the best approach here: to first look for an existing record based on CarNum, or to create a second table and then merge the two at the end? Finally, when the DataTable is done being populated and I'm pushing it to the database, I want to make sure that if records already exist with the three primary fields (DriveDate, DCode, and CarNum) they get updated with the new fields, and if they don't exist those records get appended. Is that possible with my process? tia AGP
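
    A sketch of the "find by key, update or add" step, assuming the three key columns are set as the DataTable's primary key so Rows.Find can do the existence check; variable names follow the question, everything else is illustrative.

        ' after filling ds.Tables("CarData") as above
        Dim tbl As DataTable = ds.Tables("CarData")
        tbl.PrimaryKey = New DataColumn() {tbl.Columns("DriveDate"), tbl.Columns("DCode"), tbl.Columns("CarNum")}

        ' for each parsed line of the second text file:
        Dim existing As DataRow = tbl.Rows.Find(New Object() {dd, dc, carID})
        If existing Is Nothing Then
            Dim newRow As DataRow = tbl.NewRow()
            newRow("DriveDate") = dd
            newRow("DCode") = dc
            newRow("CarNum") = carID
            ' ... remaining fields ...
            tbl.Rows.Add(newRow)
        Else
            ' ... update only the changed fields on the existing row ...
        End If

        ' a WHERE clause keeps the initial fill small instead of pulling every record:
        ' "SELECT * FROM tblCars WHERE DriveDate = @dd"
        da.Update(ds, "CarData")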

    Read the article

  • Should I be using Expression Blend to design really dynamic UIs?

    - by Robert Rossney
    My company's product is, at its core, a framework for developing metadata-driven UIs. I don't know how to characterize it less succinctly than that, and hope I won't need to for purposes of this question, but we'll see. I've been trying to come up to speed on WPF, and have been building UI prototypes here and there, and recently I decided to see if I could use Expression Blend to help with the design of these UIs. And I'm pretty mystified at this point. It appears to me as though Expression Blend is designed with the expectation that you already know all of the objects that are going to be present in the UI at design time. But our program generates these objects dynamically at runtime. For instance, a data row might be presented in a horizontal StackPanel containing alternating TextBlocks (for captions) and TextBoxes (for data fields). The number of these objects depends on metadata about the number of columns in the data row. I can, pretty readily, write code that runs through a metadata record and populates a StackPanel dynamically, setting up the binding of all of the controls to properties in either the data or metadata. (A TextBox's Width might be bound to metadata, while its Text is bound to data.) But I can't even begin to figure out how to do something like this in Expression Blend. I can manually create all these controls, so that I have a set of controls that I can apply styles to and work out the visual design of the app, but it's really a pain to do this. I can write code that goes through my data model and emits XAML for all these controls, I suppose, and then copy and paste it. But I'm going to feel really stupid if it turns out there's a way to do this sort of thing in Expression Blend and I've dropped back and punted because I'm too dim to figure out the right way to think of it. Is this enough information for someone to try formulating an answer?
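
    One way to give Blend something concrete to style is to express the dynamic row as an ItemsControl bound to the metadata-driven collection: the number of items is still only known at runtime, but the DataTemplate is a fixed artifact Blend can edit. A sketch, with the property names (Columns, Caption, Value, Width) made up for illustration.

        <ItemsControl ItemsSource="{Binding Columns}">
          <ItemsControl.ItemsPanel>
            <ItemsPanelTemplate>
              <StackPanel Orientation="Horizontal"/>
            </ItemsPanelTemplate>
          </ItemsControl.ItemsPanel>
          <ItemsControl.ItemTemplate>
            <DataTemplate>
              <StackPanel Orientation="Horizontal">
                <TextBlock Text="{Binding Caption}" Margin="4,0"/>
                <TextBox Text="{Binding Value}" Width="{Binding Width}"/>
              </StackPanel>
            </DataTemplate>
          </ItemsControl.ItemTemplate>
        </ItemsControl>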

    Read the article

  • Incrementing value by one over a lot of rows

    - by Andy Gee
    Edit: I think the answer to my question lies in the ability to set user defined variables in MySQL through PHP - the answer by Multifarious has pointed me in this direction Currently I have a script to cycle over 10M records, it's very slow and it goes like this: I first get a block of 1000 results in an array similar to this: $matches[] = array('quality_rank'=>46732, 'db_id'=>5532); $matches[] = array('quality_rank'=>12324, 'db_id'=>1234); $matches[] = array('quality_rank'=>45235, 'db_id'=>8345); $matches[] = array('quality_rank'=>75543, 'db_id'=>2562); I then cycle through them one by one and update the record $mult = count($matches)*2; foreach($matches as $m) { $rank++; $score = (($m[quality_rank] + $rank)/($mult))*100; $s = "UPDATE `packages_sorted` SET `price_rank` = '".$rank."', `deal_score` = '".$score."' WHERE `db_id` = '".$m[db_id]."' LIMIT 1"; } It seems like this is a very slow way of doing it but I can't find another way to increment the field price_rank by one each time. Can anyone suggest a better method. Note: Although I wouldn't usually store this kind of value in a database I really do need on this occasion for comparison search queries later on in the project. Any help would be kindly appreciated :)
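
    For reference, a single-statement sketch of the user-variable approach mentioned in the edit, assuming price_rank should simply follow the order of quality_rank and that the multiplier is meant to be twice the total row count; this replaces the per-row UPDATE loop and can be run from PHP as a few ordinary queries.

        SET @rank := 0;
        SET @mult := (SELECT COUNT(*) * 2 FROM packages_sorted);

        UPDATE packages_sorted
        SET price_rank = (@rank := @rank + 1),
            deal_score = ((quality_rank + @rank) / @mult) * 100
        ORDER BY quality_rank;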

    Read the article

  • What should I do with an over-bloated select-box/drop-down

    - by Tristan Havelick
    All web developers run into this problem when the amount of data in their project grows, and I have yet to see a definitive, intuitive best practice for solving it. When you start a project, you often create forms with select tags to help pick related objects for one-to-many relationships. For instance, I might have a system with Neighbors and each Neighbor belongs to a Neighborhood. In version 1 of the application I create an edit user form with a drop down that simply lists the 5 possible neighborhoods in my geographically limited application. In the beginning, this works great. So long as I have maybe 100 records or less, my select box will load quickly, and be fairly easy to use. However, let's say my application takes off and goes national. Instead of 5 neighborhoods I have 10,000. Suddenly my little drop-down takes forever to load, and once it loads, it's hard to find your neighborhood in the massive alphabetically sorted list. Now, in this particular situation, having hierarchical data, and letting users drill down using several dynamically generated drop downs, would probably work okay. However, what is the best solution when the objects/records being selected are not hierarchical in nature? In the past, I've done this with a popup with a search box and a list, but this seems clunky and dated. In today's web 2.0 world, what is a good way to find one object amongst many for one's forms? I've considered using an Ajaxified search box, but this seems to work best for free text, and falls apart a little when the data to be saved is just a reference to another object or record. Feel free to cite specific libraries with generic solutions to this problem, or simply share what you have done in your projects in a more general way.
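
    For what it's worth, a minimal sketch of the autocomplete-plus-hidden-id pattern (jQuery UI here, endpoint URL hypothetical): the visible box stays free text, but the value that actually gets saved is the id of the record the user picked, so it still works when the field is a reference to another object.

        <input type="text" id="neighborhood_name" />
        <input type="hidden" id="neighborhood_id" name="neighborhood_id" />

        <script>
        $("#neighborhood_name").autocomplete({
            source: "/neighborhoods/search",   // server returns [{label: "...", value: "...", id: 123}, ...]
            minLength: 2,
            select: function (event, ui) {
                $("#neighborhood_id").val(ui.item.id);   // persist the reference, not the text
            }
        });
        </script>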

    Read the article

  • Is there an efficient way in LINQ to use a contains match if and only if there is no exact match?

    - by Peter
    I have an application where I am taking a large number of 'product names' input by a user and retrieving some information about each product. The problem is, the user may input a partial name or even a wrong name, so I want to return the closest matches for further selection. Essentially if product name A exactly matches a record, return that, otherwise return any contains matches. Otherwise return null. I have done this with three separate statements, and I was wondering if there was a more efficient way to do this. I am using LINQ to EF, but I materialize the products to a list first for performance reasons. productNames is a List of product names (input by the user). products is a List of product 'records' var directMatches = (from s in productNames join p in products on s.ToLower() equals p.name.ToLower() into result from r in result.DefaultIfEmpty() select new {Key = s, Product = r}); var containsMatches = (from d in directMatches from p in products where d.Product == null && p.name.ToLower().Contains(d.Key) select new { d.Key, Product = p }); var matches = from d in directMatches join c in containsMatches on d.Key equals c.Key into result from r in result.DefaultIfEmpty() select new {d.Key, Product = d.Product ?? (r != null ? r.Product: null) };
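
    A single-pass sketch over the materialized lists that keeps the "exact wins, otherwise contains, otherwise nothing" rule in one place; names follow the question, and unlike the original it returns all contains-matches per input name rather than one row each.

        var matches = productNames
            .Select(s =>
            {
                var exact = products
                    .Where(p => p.name.Equals(s, StringComparison.OrdinalIgnoreCase))
                    .ToList();

                var found = exact.Any()
                    ? exact
                    : products.Where(p => p.name.IndexOf(s, StringComparison.OrdinalIgnoreCase) >= 0)
                              .ToList();

                return new { Key = s, Products = found };   // empty list when nothing matched
            })
            .ToList();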

    Read the article

  • Need advanced mod_rewrite assistance

    - by user1647719
    This is driving me bananas. I have tried multiple fixes, and have even found a generator (generateit.net/mod-rewrite/) to create my .htaccess files, but I cannot get this to work. The .htaccess file is being read - if I put garbage characters in, I get a 500 server error. Also, mod rewrite was working in a simpler format, but the client wants even more SEO, so here I am. My intention: Take the url of brinkofdesign.com/portfolio/123/456/some_seo_text.html and send it to brinkofdesign.com/brink_portfolio.php?catid=123&locationid=456. The full contents of my .htaccess file are: Options +FollowSymlinks RewriteEngine on RewriteBase / RewriteRule ^portfolio([^/]*)/([^/]*)/([^/]*)\.html$ /portfolio_brink.php?catid=$1&locationid=$2&page=$3 [NC,L] RewriteRule ^(.*)blog(.+)([0-9+])(.+).html$ blog.php?blogid=$3 [NC,L] RewriteRule ^blog.xml$ blogfeed.php The blog and blogfeed parts work fine. If it matters, I do have an actual portfolio.php page - could this be breaking it? There is no actual /portfolio/ folder - this is just part of the SEO nme. Also, the host is 1and1.com. Real life tests: brinkofdesign.com/portfolio/1/1/birmingham_al_corporate_branding.html loads my actual portfolio.php page - which is supposed to be a landing page for the portfolio, as opposed to the brink_portfolio.php page, which is my detail record page. No variables are passed (echoing out the contents of the $_GET array gives an empty array). Typing in an non SEO link like brinkofdesign.com/brink_portfolio.php?catid=1&locationid=1 does work correctly. Gurus, please help me!
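
    Two things worth ruling out, shown as a hedged sketch rather than a known fix: the portfolio pattern above has no "/" after "portfolio", and with MultiViews enabled Apache can map /portfolio/... straight onto the existing portfolio.php before mod_rewrite ever runs. Assuming the host allows Options in .htaccess and that portfolio_brink.php (as written in the rule) really is the intended target, with the working blog rules left untouched:

        Options +FollowSymlinks -MultiViews
        RewriteEngine on
        RewriteBase /

        RewriteRule ^portfolio/([^/]+)/([^/]+)/([^/]+)\.html$ portfolio_brink.php?catid=$1&locationid=$2&page=$3 [NC,QSA,L]
        RewriteRule ^(.*)blog(.+)([0-9+])(.+).html$ blog.php?blogid=$3 [NC,L]
        RewriteRule ^blog.xml$ blogfeed.php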

    Read the article

  • SQL trigger to delete rows from database

    - by wpearse
    I have an industrial system that logs alarms to a remotely hosted MySQL database. The industrial system inserts a new row whenever a property of the alarm changes (such as the time the alarm was activated, acknowledged or switched off) into a table named 'alarms'. I don't want multiple records for each alarm, so I have set up two database triggers. The first trigger mirrors each new record to a second table, creating/updating rows as required: CREATE TRIGGER `mirror_alarms` BEFORE INSERT ON `alarms` FOR EACH ROW INSERT INTO `alarm_display` (Tag,...,OffTime) VALUES (new.Tag,...,new.OffTime) ON DUPLICATE KEY UPDATE OnDate=new.OnDate,...,OffTime=new.OffTime The second trigger should execute after the first and (ideally) delete all rows from the alarms table. (I used the Tag property of the alarm because the Tag property never changes, although I suspect I could just use a 'DELETE FROM alarms WHERE 1' statement to the same effect). CREATE TRIGGER `remove_alarms` AFTER INSERT ON `alarms` FOR EACH ROW DELETE FROM alarms WHERE Tag=new.Tag My problem is that the second trigger doesn't appear to run, or if it does, the second trigger doesn't delete any rows from the database. So here's the question: why does my second trigger not do what I expect it to do?
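
    For reference: MySQL does not allow a trigger to modify the table that the triggering statement is already operating on, so the AFTER INSERT trigger's DELETE FROM alarms is rejected (error 1442) when a row is inserted - that is the most likely reason the second trigger appears to do nothing. One hedged workaround is to do the cleanup outside the trigger, for example with a scheduled event (MySQL 5.1+, event scheduler must be enabled):

        SET GLOBAL event_scheduler = ON;

        CREATE EVENT purge_alarms
        ON SCHEDULE EVERY 1 MINUTE
        DO
            DELETE FROM alarms;   -- alarm_display already carries the latest state per Tag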

    Read the article

  • codeigniter mulitple LIKE db query using associative array- but all from the same column name...?

    - by Inigo
    Hi, I'm trying to query my database using codeigniter's active record class. I have a number of blog posts stored in a table. The query is for a search function, which will pull out all the posts that have certain categories assigned to them. So the 'category' column of the table will have a list of all the categories for that post in no particular order, separated by commas, like so: Politics,History,Sociology.. etc. If a user selects, say, Politics and History, the titles of all the posts that have BOTH these categories should be returned. Right? So, the list of categories queried will be the array $cats. I thought this would work- foreach ($cats as $cat){ $this->db->like('categories',$cat); } By producing this- $this->db->like('categories','Politics'); $this->db->like('categories','History'); (Which would produce- 'WHERE categories LIKE '%Politics%' AND categories LIKE '%History%') But it doesn't work, it seems to only produce the first statement. The problem I guess is that the column name is the same for each of the chained queries. There doesn't seem to be anything in the CI user guide about this (http://codeigniter.com/user_guide/database/active_record.html) as they seem to assume that each chained statement is going to be for a different column name. Does anyone know how I could do this? Thanks! edit- Of course it is not possible to use an associative array in one statement as it would have to contain duplicate keys- in this case every key would have to be 'categories'...
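
    Chained like() calls are ANDed together by Active Record, so the loop itself should be fine; a quick way to confirm what is really being generated is last_query(). Below is a hedged sketch that also swaps in FIND_IN_SET for an exact element match inside the comma-separated column (so 'History' does not match 'Art History'); the table name is made up, and the third argument to where() turns off identifier escaping.

        foreach ($cats as $cat) {
            // exact element match inside the comma-separated categories column
            $this->db->where('FIND_IN_SET(' . $this->db->escape($cat) . ', categories) >', 0, FALSE);
        }
        $query = $this->db->get('posts');   // hypothetical table name

        echo $this->db->last_query();       // inspect the SQL actually sent to MySQL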

    Read the article

  • How To Join Tables from Two Different Contexts with LINQ2SQL?

    - by RSolberg
    I have 2 data contexts in my application (different databases) and need to be able to query a table in context A with a right join on a table in context B. How do I go about doing this in LINQ2SQL? Why?: We are using a SaaS product for tracking our time, projects, etc. and would like to send new service requests to this product to prevent our team from duplicating data entry. Context A: This db stores service request information. It is a third party DB and we are not able to make changes to the structure of this DB as it could have unintended non-supportable consequences downstream. Context B: This data stores the "log" data of service requests that have been processed. My team and I have full control over this DB's structure, etc. Unprocessed service requests should find their way into this DB and another process will identify it as not being processed and send the record to the SaaS product. This is the query that I am looking to modify. I was able to do a !list.Contains(c.swHDCaseId) initially, but this cannot handle more than 2100 items. Is there a way to add a join to the other context? var query = (from c in contextA.Cases where monitoredInboxList.Contains(c.INBOXES.inboxName) select new { //setup fields here... });
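
    LINQ to SQL cannot translate a join that spans two DataContexts into a single SQL statement, so the usual workaround is to pull just the keys from one side into memory and finish the join with LINQ to Objects. A sketch below - Cases, INBOXES and swHDCaseId come from the question, the log table name is invented.

        // keys of cases already logged in context B (the database we control)
        var processedIds = new HashSet<int>(
            contextB.CaseLogs.Select(l => l.swHDCaseId));    // hypothetical table

        // unprocessed cases from context A, filtered in memory
        var unprocessed = contextA.Cases
            .Where(c => monitoredInboxList.Contains(c.INBOXES.inboxName))
            .AsEnumerable()                                  // switch to LINQ to Objects here
            .Where(c => !processedIds.Contains(c.swHDCaseId))
            .Select(c => new { c.swHDCaseId /*, other fields */ })
            .ToList();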

    Read the article

  • Database locking: ActiveRecord + Heroku

    - by JP
    I'm building a Sinatra based app for deployment on Heroku. You can imagine it like a standard URL shortener but where old shortcodes expire and become available for new URLs (I realise this is a silly concept but its easier to explain this way). I'm representing the shortcode in my database as an integer and redefining its reader to give a nice short and unique string from the integer. As some rows will be deleted, I've written code that goes thru all the shortcode integers and picks the first free one to use just before_save. Unfortunately I can make my code create two rows with identical shortcode integers if I run two instances very quickly one after another, which is obviously no good! How should I implement a locking system so that I can quickly save my record with a unique shortcode integer? Here's what I have so far: Chars = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a CharLength = Chars.length class Shorts < ActiveRecord::Base before_save :gen_shortcode after_save :done_shortcode def shortcode i = read_attribute(:shortcode).to_i return '0' if i == 0 s = '' while i > 0 s << Chars[i.modulo(CharLength)] i /= 62 end s end private def gen_shortcode shortcode = 0 self.class.find(:all,:order=>"shortcode ASC").each do |s| if s.read_attribute(:shortcode).to_i != shortcode # Begin locking? break end shortcode += 1 end write_attribute(:shortcode,shortcode) end def done_shortcode # End Locking? end end
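
    One hedged alternative to explicit locking: let the database enforce uniqueness and retry on collision, so two racing saves cannot both keep the same integer. The sketch below assumes a unique index on shortcode and that a violation surfaces as ActiveRecord::StatementInvalid (on newer ActiveRecord it is ActiveRecord::RecordNotUnique, a subclass).

        # migration, run once
        add_index :shorts, :shortcode, :unique => true

        # saving with retry: the loser of a race hits the unique index,
        # regenerates the next free shortcode in before_save and tries again
        def save_short(short, attempts = 5)
          attempts.times do
            begin
              return true if short.save
            rescue ActiveRecord::StatementInvalid
              # duplicate shortcode - fall through and retry
            end
          end
          false
        end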

    Read the article

  • Square Brackets in Python Regular Expressions (re.sub)

    - by user1479984
    I'm migrating wiki pages from the FlexWiki engine to the FOSwiki engine using Python regular expressions to handle the differences between the two engines' markup languages. The FlexWiki markup and the FOSwiki markup, for reference. Most of the conversion works very well, except when I try to convert the renamed links. Both wikis support renamed links in their markup. For example, Flexwiki uses: "Link To Wikipedia":[http://www.wikipedia.org/] FOSwiki uses: [[http://www.wikipedia.org/][Link To Wikipedia]] both of which render as a link labelled Link To Wikipedia. I'm using the regular expression renameLink = re.compile ("\"(?P<linkName>[^\"]+)\":\[(?P<linkTarget>[^\[\]]+)\]") to parse out the link elements from the FlexWiki markup, which after running through something like "Link Name":[LinkTarget] is reliably producing the groups <linkName> = Link Name and <linkTarget> = LinkTarget My issue occurs when I try to use re.sub to insert the parsed content into the FOSwiki markup. My experience with regular expressions isn't anything to write home about, but I'm under the impression that, given the groups <linkName> = Link Name and <linkTarget> = LinkTarget a line like line = renameLink.sub ( "[[\g<linkTarget>][\g<linkName>]]" , line ) should produce [[LinkTarget][Link Name]] However, in the output to the text files I'm getting [[LinkTarget [[Link Name]] which breaks the renamed links. After a little bit of fiddling I managed a workaround, where line = renameLink.sub ( "[[\g<linkTarget>][ [\g<linkName>]]" , line ) produces [[LinkTarget][ [[Link Name]] which, when displayed in FOSwiki looks like <[[Link Name> <--- Which WORKS, but isn't very pretty. I've also tried line = renameLink.sub ( "[[\g<linkTarget>]" + "[\g<linkName>]]" , line ) which is producing [[linkTarget [[linkName]] There are probably thousands of instances of these renamed links in the pages I'm trying to convert, so fixing it by hand isn't any good. For the record I've run the script under Python 2.5.4 and Python 2.7.3, and gotten the same results. Am I missing something really obvious with the syntax? Or is there an easy workaround?
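
    A hedged sketch of an alternative that usually sidesteps replacement-string escaping issues entirely: pass a function to re.sub, since whatever the function returns is inserted literally, with no further backreference or escape processing.

        import re

        rename_link = re.compile(r'"(?P<linkName>[^"]+)":\[(?P<linkTarget>[^\[\]]+)\]')

        def to_foswiki(match):
            # the returned text is used verbatim
            return '[[{0}][{1}]]'.format(match.group('linkTarget'), match.group('linkName'))

        line = '"Link To Wikipedia":[http://www.wikipedia.org/]'
        line = rename_link.sub(to_foswiki, line)
        # -> [[http://www.wikipedia.org/][Link To Wikipedia]]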

    Read the article

  • design pattern for related inputs

    - by curiousMo
    My question is a design question : let's say i have a data entry web page with 4 drop down lists, each depending on the previous one, and a bunch of text boxes. ddlCountry (DropDownList) ddlState (DropDownList) ddlCity (DropDownList) ddlBoro (DropDownList) txtAddress (TxtBox) txtZipcode(TxtBox) and an object that represents a datarow with a value for each: countrySeqid stateSeqid citySeqid boroSeqid address zipCode naturally the country, state, city and boro values will be values of primary keys of some lookup tables. when the user chooses to edits that record, i would load it from database and load it into the page. the issue that I have is how to streamline loading the DropDownLists. i have some code that would grab the object,look thru its values and move them to their corresponding input controls in one shot. but in this case i will have to load the ddlCountry with possible values, then assign values, then do the same thing for the rest of the ddls. I guess i am looking for an elegant solution. i am using asp.net, but i think it is irrelevant to the question. i am looking more into a design pattern.
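
    One small thing that helps regardless of the pattern chosen: a helper that binds a DropDownList from its lookup and selects the stored value in one call, so the country/state/city/boro loads all look the same. A sketch (ASP.NET WebForms assumed, lookup sources hypothetical):

        private static void BindLookup(DropDownList ddl, DataTable lookup,
                                       string textField, string valueField, object selectedValue)
        {
            ddl.DataSource = lookup;
            ddl.DataTextField = textField;
            ddl.DataValueField = valueField;
            ddl.DataBind();

            if (selectedValue != null)
                ddl.SelectedValue = selectedValue.ToString();
        }

        // when loading the record being edited:
        // BindLookup(ddlCountry, GetCountries(), "Name", "CountrySeqid", row.countrySeqid);
        // BindLookup(ddlState,   GetStates(row.countrySeqid), "Name", "StateSeqid", row.stateSeqid);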

    Read the article

  • Designing a chain of states

    - by devoured elysium
    I want to model a kind of FSM (Finite State Machine). I have a sequence of states (let's say, from StateA to StateZ). This sequence is called a Chain and is implemented internally as a List. I will add states in the order I want them to run. My purpose is to be able to make a sequence of actions on my computer (for example, mouse clicks). (I know this has been done a zillion times). So a state is defined as: boolean Precondition() <- Checks to see if, for this state, some condition is true. For example, if I want to click the Record button of a program, in this method I would check if the program's process is running or not. If it is, go to the next state in the chain list, otherwise go to what was defined as the fail state (generally the first state of them all). IState GetNextState() <- Returns the next state to evaluate. If Precondition() was successful, it should yield the next state in the chain, otherwise it should yield the fail state. Run() Simply checks the Precondition() and sets the internal data so GetNextState() works as expected. So, a naive approach to this would be something like this: Chain chain = new Chain(); //chain.AddState(new State(Precondition, FailState, NextState) <- Method structure chain.AddState(new State(new WinampIsOpenCondition(), null, new <problem here, I want to refer to a state that still wasn't defined!>); The big problem is that I want to make a reference to a State that at this point still hasn't been defined. I could circumvent the problem by using strings when referring to states and using an internal hashtable, but isn't there a clearer alternative? I could just pass only the pre-condition and failure states in the constructor, having the chain just before execution put in each state the correct next state in a public property, but that seems kind of awkward.
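
    The forward-reference problem usually disappears if "next" is resolved lazily instead of at construction time - for example by position in the chain, or by giving each state a Func<IState> to call when it is needed. A minimal sketch of the by-position variant, with the first state doubling as the fail state as described above:

        public class Chain
        {
            private readonly List<IState> _states = new List<IState>();

            public void AddState(IState state)
            {
                _states.Add(state);
            }

            public void Run()
            {
                int i = 0;
                while (i < _states.Count)
                {
                    _states[i].Run();
                    // success moves forward, failure drops back to the fail state (index 0)
                    i = _states[i].Precondition() ? i + 1 : 0;
                }
            }
        }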

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, if we have a customer details table that stores data for customers from different countries, and each country has a slightly different address configuration - plus 1-2 extra fields, e.g. French customer details may also store details for entry code, and floor/level plus title fields (madamme, etc.). South Africa would have a security number. And so on. Given that we're talking about minor variances my idea is to put all of the fields into the table and use what is needed on each form. My colleague believes we should have a separate table with extra data. E.g. customer_info_fr. But this seams to totally defeat the purpose of a combined table in the first place. His argument is that empty fields / columns is bad - but I'm struggling to find justification in terms of database design principles for or against this argument and preferred solutions. Another option is a separate mini EAV table that stores extra data with parent_id, key, val fields. Or to serialise extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF which is what I would typically use as a reference for how to structure data. So my question specifically: - if you have slight variances in data for each record (1-2 different fields for instance) what is the best way to proceed?
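
    For the record, a sketch of the key/value side table mentioned above (names invented); nullable columns in the main table are usually fine for only 1-2 per-country variations, and this shape is the middle ground if the variations keep growing:

        CREATE TABLE customer_extra (
            customer_id INT          NOT NULL,   -- FK to the main customer table
            attr_key    VARCHAR(50)  NOT NULL,   -- e.g. 'entry_code', 'security_number'
            attr_value  VARCHAR(255) NULL,
            PRIMARY KEY (customer_id, attr_key)
        );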

    Read the article

  • what is the best way to optimize my json on an asp.net-mvc site

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site and we have a pretty slow network (internal application), and it seems to be taking the grid a long time to load (the issue is both network as well as parsing and rendering). I am trying to determine how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action to load data into the grid: [AcceptVerbs(HttpVerbs.Get)] public ActionResult GridData1(GridData args) { var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new { i.Id, Name = "<div class='showDescription' id= '" + i.id+ "'>" + i.Name + "</div>", MyValue = GetImageUrl(_map, i.value, "star"), ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", Url.Action("Link", "Order", new { id = i.id }), i.Id), i.Target, i.Owner, EndDate = i.EndDate, Updated = "<div class='showView' aitId= '" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>", }); return Json(paginatedData); } So I am building up JSON data (I have about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data. In some of the JSON fields I am appending HTML on top of the raw "data". This is the same HTML on every record. It seems like it would be more efficient if I could just send the data and "append" the HTML around it on the client side. Is this possible? Then I would just be sending the actual data over the wire and have the client side put together the rest of the HTML tags (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.
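
    One option, sketched below with jqGrid's custom formatter hook (formatter receives cellvalue, options, rowObject): send only the raw values in the JSON and let each column wrap them in HTML on the client. Whether rowObject exposes Id as shown depends on the jsonReader configuration, so treat this as illustrative only.

        // colModel entry: the server now sends just the plain name and id
        { name: 'Name', index: 'Name',
          formatter: function (cellvalue, options, rowObject) {
              // wrap on the client instead of in the controller action
              return "<div class='showDescription' id='" + rowObject.Id + "'>" + cellvalue + "</div>";
          }
        }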

    Read the article

  • Nhibernate Fluent domain Object with Id(x => x.id).GeneratedBy.Assigned not saveable

    - by urpcor
    Hi there, I am using the corresponding domain classes with mappings for a legacy db. Now the Ids of the entities are calculated by a stored procedure in the DB which gives back the Id for the new row (it's legacy, I can't change this). Now I create the new entity, set the Id and call Save. But nothing happens: no exception, and even NH Profiler does not show a thing. It's as if the Save call does nothing. I expect that NH thinks the record is already in the db because it's got an Id already. But I am using Id(x => x.id).GeneratedBy.Assigned() and intentionally the Session.Save(object) method. I am confused. I saw so many samples where it worked. Does anybody have any ideas about it? public class Appendix { public virtual int id { get; set; } public virtual AppendixHierarchy AppendixHierachy { get; set; } public virtual byte[] appendix { get; set; } } public class AppendixMap : ClassMap<Appendix> { public AppendixMap () { WithTable("appendix"); Id(x => x.id).GeneratedBy.Assigned(); References(x => x.AppendixHierachy).ColumnName("appendixHierarchyId"); Map(x => x.appendix); } }
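
    With an assigned id generator, Save() only registers the insert with the session; nothing is sent to the database until the session is flushed, which would match "no exception, nothing in NH Profiler". A hedged sketch of the usual shape - committing the transaction (or calling Flush) is what actually issues the INSERT:

        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var appendix = new Appendix
            {
                id = idFromLegacyProcedure,     // assigned by the stored procedure
                AppendixHierachy = hierarchy,
                appendix = fileBytes
            };

            session.Save(appendix);
            tx.Commit();    // flush happens here
        }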

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book,amend,edit,delete trades. The system is regulated by a central deal capture service. The deal capture service informs all the user of any updates that occur. The problem comes when we have crashes, as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However this doesn't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment so it can't impact performance too much. Ideas wise I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade id's, user dialog choices, etc ..) The system would record stack traces (on a per thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on crash, I could dump this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks Rich
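
    A bare-bones sketch of the macro-plus-ring-buffer idea: each instrumented function pushes one line into a fixed-size per-thread history that the crash handler can dump. Everything here is illustrative (thread_local is C++11; older MSVC builds would use __declspec(thread) and avoid std::string inside the buffer).

        #include <array>
        #include <cstdio>
        #include <string>

        struct TraceHistory {
            static const size_t kSize = 256;           // how much history to keep per thread
            std::array<std::string, kSize> entries;
            size_t next = 0;

            void push(const std::string& line) {
                entries[next % kSize] = line;
                ++next;
            }

            void dump(FILE* out) const {               // call from the crash/dump handler
                size_t start = next > kSize ? next - kSize : 0;
                for (size_t i = start; i < next; ++i)
                    std::fprintf(out, "%s\n", entries[i % kSize].c_str());
            }
        };

        thread_local TraceHistory g_trace;             // one history per thread

        #define TRACE_SCOPE(extra) \
            g_trace.push(std::string(__FUNCTION__) + " " + (extra))

        // usage: void BookTrade(const Trade& t) { TRACE_SCOPE("trade id=" + t.id); ... }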

    Read the article

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants to have data in MySQL DBs used for our website to be "linked and synced" with a Financial Server that has its DB in SQL Server. Sooooo...even though I have no idea how to accomplish this, this just sounds like an absolute nightmare especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the Financial Server. Any ideas how to accomplish this? (within reason?) Also, his big thing is he wants to basically pull up the data from any record a user enters and using data pulled from that do all sorts of calculations using ANOTHER program that stores its data (apparently) in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell a ODBC eliminates complicated junk like this. :/ I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or programming I have no say over being connected DIRECTLY to my MySQL DB for the website. If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of this? (I'm a low-level IT guy w/ a decent grasp on programming...but I'm no expert - should I try to push this off to a seasoned IT pro?) Thanks in advance.

    Read the article

  • Best way to sign data in web form with user certificate

    - by salgiza
    We have a C# web app where users will connect using a digital certificate stored in their browsers. From the examples that we have seen, verifying their identity will be easy once we enable SSL, as we can access the fields in the certificate, using Request.ClientCertificate, to check the user's name. We have also been requested, however, to sign the data sent by the user (a few simple fields and a binary file) so that we can prove, without doubt, which user entered each record in our database. Our first thought was creating a small text signature including the fields (and, if possible, the md5 of the file) and encrypt it with the private key of the certificate, but... As far as I know we can't access the private key of the certificate to sign the data, and I don't know if there is any way to sign the fields in the browser, or we have no other option than using a Java applet. And if it's the latter, how we would do it (Is there any open source applet we can use? Would it be better if we create one ourselves?) Of course, it would be better if there was any way to "sign" the fields received in the server, using the data that we can access from the user's certificate. But if not, any info on the best way to solve the problem would be appreciated.

    Read the article

  • Android: How to periodically send location to a server

    - by Mark
    Hi, I am running a Web service that allows users to record their trips (kind of like Google's MyTracks) as part of a larger app. The thing is that it is easy to pass data, including coords and other items, to the server when a user starts a trip or ends it. Being a newbie, I am not sure how to set up a background service that sends the location updates once every (pre-determined) period (min 3 minutes, max 1 hr) until the user flags the end of the trip, or until a preset amount of time elapses. Once the trip is started from the phone, the server responds with a polling period for the phone to use as the interval between updates. This part works, in that I can display the response on the phone, and my server registers the user's action. Similarly, the trip is closed server-side upon the close trip request. However, when I tried starting a periodic tracking method from inside the StartTrack Activity, using requestLocationUpdates(String provider, long minTime, float minDistance, LocationListener listener) where minTime is the poll period from the server, it just did not work, and I'm not getting any errors. So it means I'm clueless at this point, never having used Android before. I have seen many posts here on using background services with handlers, pending intents, and other things to do similar stuff, but I really don't understand how to do it. I would like the user to do other stuff on the phone while the updates are going on, so if you guys could point me to a tutorial that shows how to actually write background services (maybe these run as separate classes?) or other ways of doing this, that would be great. Thanks!
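
    A stripped-down sketch of the usual shape: a started Service owns the LocationListener, asks for updates at the interval the server returned, and pushes each fix to the web service off the main thread. The class name, intent extra and sendFixToServer are all made up; error handling, wake locks and the stop-after-timeout logic are omitted.

        import android.app.Service;
        import android.content.Intent;
        import android.location.Location;
        import android.location.LocationListener;
        import android.location.LocationManager;
        import android.os.Bundle;
        import android.os.IBinder;

        public class TripTrackingService extends Service {

            private LocationManager locationManager;

            private final LocationListener listener = new LocationListener() {
                public void onLocationChanged(Location location) {
                    final double lat = location.getLatitude();
                    final double lon = location.getLongitude();
                    // keep network I/O off the main thread
                    new Thread(new Runnable() {
                        public void run() { sendFixToServer(lat, lon); }
                    }).start();
                }
                public void onStatusChanged(String provider, int status, Bundle extras) {}
                public void onProviderEnabled(String provider) {}
                public void onProviderDisabled(String provider) {}
            };

            @Override
            public int onStartCommand(Intent intent, int flags, int startId) {
                long pollMillis = intent.getLongExtra("pollMillis", 180000L); // interval from the server
                locationManager = (LocationManager) getSystemService(LOCATION_SERVICE);
                locationManager.requestLocationUpdates(
                        LocationManager.GPS_PROVIDER, pollMillis, 0f, listener);
                return START_STICKY;
            }

            @Override
            public void onDestroy() {                      // stop the service when the trip ends
                locationManager.removeUpdates(listener);
            }

            @Override
            public IBinder onBind(Intent intent) { return null; }

            private void sendFixToServer(double lat, double lon) { /* HTTP POST to the web service */ }
        }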

    Read the article

  • waveInProc / Windows audio question...

    - by BTR
    I'm using the Windows API to get audio input. I've followed all the steps on MSDN and managed to record audio to a WAV file. No problem. I'm using multiple buffers and all that. I'd like to do more with the buffers than simply write to a file, so now I've got a callback set up. It works great and I'm getting the data, but I'm not sure what to do with it once I have it. Here's my callback... everything here works: // Media API callback void CALLBACK AudioRecorder::waveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2) { // Data received if (uMsg == WIM_DATA) { // Get wav header LPWAVEHDR mBuffer = (WAVEHDR *)dwParam1; // Now what? for (unsigned i = 0; i != mBuffer->dwBytesRecorded; ++i) { // I can see the char, how do get them into my file and audio buffers? cout << mBuffer->lpData[i] << "\n"; } // Re-use buffer mResultHnd = waveInAddBuffer(hWaveIn, mBuffer, sizeof(mInputBuffer[0])); // mInputBuffer is a const WAVEHDR * } } // waveInOpen cannot use an instance method as its callback, // so we create a static method which calls the instance version void CALLBACK AudioRecorder::staticWaveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2) { // Call instance version of method reinterpret_cast<AudioRecorder *>(dwParam1)->waveInProc(hWaveIn, uMsg, dwInstance, dwParam1, dwParam2); } Like I said, it works great, but I'm trying to do the following: Convert the data to short and copy into an array Convert the data to float and copy into an array Copy the data to a larger char array which I'll write into a WAV Relay the data to an arbitrary output device I've worked with FMOD a lot and I'm familiar with interleaving and all that. But FMOD dishes everything out as floats. In this case, I'm going the other way. I guess I'm basically just looking for resources on how to go from LPSTR to short, float, and unsigned char. Thanks much in advance!
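
    Assuming the device was opened as 16-bit PCM (wBitsPerSample = 16), lpData is really an array of signed 16-bit samples, so the conversions are mostly casts; a sketch of the body of the loop above (<vector> assumed included, and myWavBytes is a hypothetical std::vector<char> holding the WAV data chunk):

        // inside the WIM_DATA branch
        LPWAVEHDR hdr = (WAVEHDR*)dwParam1;

        const short* samples = reinterpret_cast<const short*>(hdr->lpData);
        size_t sampleCount   = hdr->dwBytesRecorded / sizeof(short);

        std::vector<short> asShort(samples, samples + sampleCount);     // 1. shorts

        std::vector<float> asFloat(sampleCount);                        // 2. floats in [-1, 1)
        for (size_t i = 0; i < sampleCount; ++i)
            asFloat[i] = samples[i] / 32768.0f;

        myWavBytes.insert(myWavBytes.end(),                             // 3. raw bytes for the WAV file
                          hdr->lpData, hdr->lpData + hdr->dwBytesRecorded);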

    Read the article

  • Check the code and point out the mistake

    - by Vibha
    Here is the code: Ext.onReady(function(){ alert("inside onReady"); Ext.QuickTips.init(); var employee = Ext.data.Record.create([ {name:'firstname'}, {name:'lastname'}]); var myReader = new Ext.data.JsonReader({ root:"EmpInfo", },employee); var store = new Ext.data.JsonStore({ id:'ID' ,root:'EmpInfo' ,totalProperty:'totalCount' ,url:'test.php' ,autoLoad:true ,fields:[ {name:'firstname', type:'string'} ,{name:'lastname', type:'string'} ] }); var myPanel = new Ext.grid.GridPanel({ store: store ,columns:[{ dataIndex:'firstname' ,header:'First Name' ,width:139 },{ dataIndex:'lastname' ,header:'Middle Name' ,width:139 } ] }); var myWindow = new Ext.Window({ width:300, height:300, layout:'fit', closable:false, resizable:false, items:[myPanel] }); myWindow.show(); }); And the PHP code is: <?php $o = array( "success" => true, "data" => array( "firstname" => "ABC", "lastname" => "MNO" ) ); $_SESSION["err"] = isset($_SESSION["err"]) ? !$_SESSION["err"] : true; header("Content-Type: application/json"); echo json_encode($o); ?> I want to print the values ABC and MNO in the grid panel. I'm using ExtJS 2.3. Please help me out. Thanks
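
    One thing worth checking: the store is configured with root 'EmpInfo' and totalProperty 'totalCount', so the PHP has to emit exactly that shape (an array of row objects under EmpInfo), not a "data" root. A sketch of a matching response, values taken from the question:

        <?php
        $o = array(
            "totalCount" => 1,
            "EmpInfo"    => array(
                array("firstname" => "ABC", "lastname" => "MNO")
            )
        );
        header("Content-Type: application/json");
        echo json_encode($o);
        // {"totalCount":1,"EmpInfo":[{"firstname":"ABC","lastname":"MNO"}]}
        ?>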

    Read the article

  • Magento products will not show in category

    - by Aaron
    I've recently been tasked with the build and deployment of a large Ecommerce site. In the past we've had to use the clients legacy X-cart installation for redevelopment (too far integrated with their existing work flow). We'd heard good things about Magento, so I've set up a test install to get to grips with it. After a couple of initial issues, there is a live development site which displays categories on the default theme. The problem we've hit now is that products don't display..! After a lot more in-depth research into this, all I've been able to discover is that quite a number of developers endorse using other solutions entirely, with the other 50% saying after the steep learning curve the platform is as wonderful as we'd initially been led to believe. Now, my test category is showing, so I know this is configured properly. I've set up three test products and associated them with this (all done following the Magento user guide), checked double checked and thrice checked the products are enabled and visible individually, yet still the front end says the category has no products in it. I've cleared the cache repeatedly, reset everything possible many times in index management - no products show up. I have to make a call tomorrow morning on whether we're going ahead with Magento. If I can't even get it to show products I'm going to have to go with something with a more established track record and more community support available. Can anybody advise what could possibly be wrong here?

    Read the article
