Search Results

Search found 4426 results on 178 pages for 'bunch'.

Page 149/178 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to about 100 times normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic.

    My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

      - RAM buffers
      - commit log
      - binary log
      - actual tables
      - all of the above places on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than have a high probability of an outage when the crowd arrives.

    During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables.

    In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that?

    If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so that's a write-heavy workload.
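
    If it helps, the server variables usually involved in trading durability for write speed are innodb_flush_log_at_trx_commit and sync_binlog. Below is a minimal sketch of flipping them at runtime from Python, assuming the mysql-connector-python package, placeholder credentials, and a user with the privilege to set global variables.

        # Sketch: relax InnoDB/binlog durability for the duration of a traffic burst.
        # Host, user, password and database below are placeholders.
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="admin",
                                       password="secret", database="mysite")
        cur = conn.cursor()

        # Write the redo log at commit but fsync it only about once per second.
        cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 2")
        # Stop fsyncing the binary log on every transaction; let the OS decide.
        cur.execute("SET GLOBAL sync_binlog = 0")

        cur.close()
        conn.close()

    The same statements can be reversed (back to value 1 for each) once the burst is over, so normal durability only lapses while the crowd is actually there.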

    Read the article

  • create a class attribute without going through __setattr__

    - by eric.frederich
    Hello,

    What I have below is a class I made to easily store a bunch of data as attributes. They wind up getting stored in a dictionary. I override __getattr__ and __setattr__ to store and retrieve the values back in different types of units. When I started overriding __setattr__ I was having trouble creating that initial dictionary in the 2nd line of __init__, like so:

        super(MyDataFile, self).__setattr__('_data', {})

    My question: is there an easier way to create a class level attribute without going through __setattr__? Also, should I be concerned about keeping a separate dictionary or should I just store everything in self.__dict__?

        #!/usr/bin/env python
        from unitconverter import convert
        import re

        special_attribute_re = re.compile(r'(.+)__(.+)')

        class MyDataFile(object):
            def __init__(self, *args, **kwargs):
                super(MyDataFile, self).__init__(*args, **kwargs)
                super(MyDataFile, self).__setattr__('_data', {})

            #
            # For attribute type access
            #
            def __setattr__(self, name, value):
                self._data[name] = value

            def __getattr__(self, name):
                if name in self._data:
                    return self._data[name]
                match = special_attribute_re.match(name)
                if match:
                    varname, units = match.groups()
                    if varname in self._data:
                        return self.getvaras(varname, units)
                raise AttributeError

            #
            # other methods
            #
            def getvaras(self, name, units):
                from_val, from_units = self._data[name]
                if from_units == units:
                    return from_val
                return convert(from_val, from_units, units), units

            def __str__(self):
                return str(self._data)

        d = MyDataFile()
        print d

        # set like a dictionary or an attribute
        d.XYZ = 12.34, 'in'
        d.ABC = 76.54, 'ft'

        # get it back like a dictionary or an attribute
        print d.XYZ
        print d.ABC

        # get conversions using getvaras or using a specially formed attribute
        print d.getvaras('ABC', 'cm')
        print d.XYZ__mm
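
    For reference, the two idioms usually used to create such an attribute without triggering a custom __setattr__ are writing into the instance dict directly and calling object.__setattr__. A small sketch of both, not a rewrite of the class above:

        # Two common ways to bypass a custom __setattr__ when creating '_data'.
        class Sketch(object):
            def __init__(self):
                # Option 1: write straight into the instance dict.
                self.__dict__['_data'] = {}
                # Option 2 (equivalent): call the base implementation explicitly.
                # object.__setattr__(self, '_data', {})

            def __setattr__(self, name, value):
                self._data[name] = value

            def __getattr__(self, name):
                try:
                    return self.__dict__['_data'][name]
                except KeyError:
                    raise AttributeError(name)

        s = Sketch()
        s.x = 1
        print s.x   # prints 1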

    Read the article

  • MVC2 Json request not actually hitting the controller

    - by SlackerCoder
    I have a JSON request, but it seems that it is not hitting the controller. Here's the jQuery code:

        $("#ddlAdminLogsSelectLog").change(function() {
            globalLogSelection = $("#ddlAdminLogsSelectLog").val();
            alert(globalLogSelection);
            $.getJSON("/Administrative/AdminLogsChangeLogSelection",
                { NewSelection: globalLogSelection },
                function(data) {
                    if (data.Message == "Success") {
                        globalCurrentPage = 1;
                    }
                    else if (data.Message == "Error") {
                        //Do Something
                    }
                });
        });

    The alert is there to show me if it actually fired the change event, which it does. Here's the method in the controller:

        public ActionResult AdminLogsChangeLogSelection(String NewSelection)
        {
            String sMessage = String.Empty;
            StringBuilder sbDataReturn = new StringBuilder();
            try
            {
                if (NewSelection.Equals("Application Log"))
                {
                    int i = 0;
                }
                else if (NewSelection.Equals("Email Log"))
                {
                    int l = 0;
                }
            }
            catch (Exception e)
            {
                //Do Something
                sMessage = "Error";
            }
            return Json(new { Message = sMessage, DataReturn = sbDataReturn.ToString() },
                JsonRequestBehavior.AllowGet);
        }

    I have a bunch of JSON requests in my application, and this seems to happen only in this area. This is a separate area (I have 6 "areas" in the app, 5 of which work fine with JSON requests). This controller is named "AdministrativeController", if that matters. Does anything jump out at anyone as being incorrect, or any reason why the request would not pass to the server side?

    Read the article

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
        SELECT * from `employees` a
        LEFT JOIN (SELECT phone1 p1, count(*) c FROM `employees` GROUP BY phone1) b
        ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem. I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get this to complete in about 4 minutes on a 10,000 row table successfully, but performance drops exponentially with larger tables. Remotely it will lose connection to the server, and locally it brings my system to its knees and seems to go on forever.

    This query is only a smaller step I was trying to do when a larger query couldn't complete. Maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times that the street address it is attached to occurs, then it must be an office number.

    So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in that group, so I figured I have to join the full table to the count table where the phone number matches. This does work, but as I said I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic, and it doesn't seem like a crazy query to do. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or db?
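
    For what it's worth, the office-number heuristic described above is easy to prototype outside the database before committing to a query. A rough Python sketch with made-up rows and assumed column names (phone1, address1):

        # Heuristic: a phone number looks like an office line if it occurs more than
        # once and exactly as often as the street address it is attached to.
        # Rows and column names are made up for illustration.
        from collections import Counter

        rows = [
            {"phone1": "555-0100", "address1": "1 Main St"},
            {"phone1": "555-0100", "address1": "1 Main St"},
            {"phone1": "555-0199", "address1": "9 Oak Ave"},
        ]

        phone_counts = Counter(r["phone1"] for r in rows)
        addr_counts = Counter(r["address1"] for r in rows)
        phone_to_addr = dict((r["phone1"], r["address1"]) for r in rows)

        for phone, count in phone_counts.items():
            addr = phone_to_addr[phone]
            if count > 1 and count == addr_counts[addr]:
                print phone, "looks like an office number at", addr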

    Read the article

  • Get Mail with PHP and IMAP in Gmail just loading

    - by Oscar Godson
    I'm not sure why. I've tried a bunch of different code, wrote it myself, and copied other people's tutorials, but every bit of code loads forever and eventually stops due to script processing times on the server. Does anyone know why?

    Oh, and IMAP is turned on; I get IMAP / Exchange on my iPhone from this same account fine. And IMAP is turned on in my version of PHP (checked with phpinfo, they all say enabled).

        <?php
        /* connect to gmail */
        $hostname = '{imap.gmail.com:993/imap/ssl}INBOX';
        $username = '[email protected]';
        $password = 'xxxxxx';

        /* try to connect */
        $inbox = imap_open($hostname,$username,$password) or die('Cannot connect to Gmail: ' . imap_last_error());

        /* grab emails */
        $emails = imap_search($inbox,'ALL');

        /* if emails are returned, cycle through each... */
        if($emails) {

            /* begin output var */
            $output = '';

            /* put the newest emails on top */
            rsort($emails);

            /* for every email... */
            foreach($emails as $email_number) {

                /* get information specific to this email */
                $overview = imap_fetch_overview($inbox,$email_number,0);
                $message = imap_fetchbody($inbox,$email_number,2);

                /* output the email header information */
                $output.= '<div class="toggler '.($overview[0]->seen ? 'read' : 'unread').'">';
                $output.= '<span class="subject">'.$overview[0]->subject.'</span> ';
                $output.= '<span class="from">'.$overview[0]->from.'</span>';
                $output.= '<span class="date">on '.$overview[0]->date.'</span>';
                $output.= '</div>';

                /* output the email body */
                $output.= '<div class="body">'.$message.'</div>';
            }

            echo $output;
        }

        /* close the connection */
        imap_close($inbox);
        ?>

    Read the article

  • C# comparing two files regex problem.

    - by Mike
    Hi everyone,

    What I'm trying to do is open a huge list of files (about 40k records) and match them against the lines in a file that contains 2 million records, and if a line from file A matches a line in file B, write out that line. File A contains a bunch of file names without extensions and file B contains full file paths including extensions. I'm using this but I can't get it to go:

        string alphaFilePath = (@"C:\Documents and Settings\g\Desktop\Arrp\Find\natst_ready.txt");
        List<string> alphaFileContent = new List<string>();
        using (FileStream fs = new FileStream(alphaFilePath, FileMode.Open))
        using (StreamReader rdr = new StreamReader(fs))
        {
            while (!rdr.EndOfStream)
            {
                alphaFileContent.Add(rdr.ReadLine());
            }
        }

        string betaFilePath = @"C:\Documents and Settings\g\Desktop\Arryup\Find\eble.txt";
        StringBuilder sb = new StringBuilder();
        using (FileStream fs = new FileStream(betaFilePath, FileMode.Open))
        using (StreamReader rdr = new StreamReader(fs))
        {
            while (!rdr.EndOfStream)
            {
                string betaFileLine = rdr.ReadLine();
                string matchup = Regex.Match(alphaFileContent, @"(\\)(\\)(\\)(\\)(\\)(\\)(\\)(\\)(.*)(\.)").Groups[9].Value;
                if (alphaFileContent.Equals(matchup))
                {
                    File.AppendAllText(@"C:\array_tech.txt", betaFileLine);
                }
            }
        }

    This doesn't work because the alphaFileContent is a single line only, and I'm having a hard time figuring out how to get my regex to work on the file that contains all the file paths (betaFilePath). Here is a sample of the beta file path:

        C:\arres_i\Grn\Ora\SEC\DBZ_EX1\Nes\001\DZO-EX00001.txt

    Here is the line I'm trying to compare from my alpha file:

        DZO-EX00001
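
    The underlying approach (load the 40k bare names into a set, then stream the big file of paths and check each path's base name without its extension) can be expressed without the regex. Here is a rough sketch of that idea in Python, reusing the two file names from the question as placeholders; the same idea maps to C# with a HashSet<string> and Path.GetFileNameWithoutExtension.

        # Sketch: set-based matching of bare file names against full Windows paths.
        # File names are placeholders taken from the question.
        import ntpath, os

        with open("natst_ready.txt") as f:
            wanted = set(line.strip() for line in f)

        with open("eble.txt") as paths, open("matches.txt", "w") as out:
            for line in paths:
                path = line.strip()
                # ntpath.basename handles backslash paths on any platform
                name = os.path.splitext(ntpath.basename(path))[0]   # DZO-EX00001.txt -> DZO-EX00001
                if name in wanted:
                    out.write(path + "\n")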

    Read the article

  • Any suggestions to improve my PDO connection class?

    - by Scarface
    Hey guys, I am pretty new to PDO, so I basically just put together a simple connection class using information out of the introductory book I was reading, but is this connection efficient? If anyone has any informative suggestions, I would really appreciate it.

        class PDOConnectionFactory{

            public $con = null;
            // switch database?
            public $dbType = "mysql";

            // connection parameters
            public $host = "localhost";
            public $user = "user";
            public $senha = "password";
            public $db = "database";

            public $persistent = false;

            // new PDOConnectionFactory( true ) <--- persistent connection
            // new PDOConnectionFactory()       <--- no persistent connection
            public function PDOConnectionFactory( $persistent=false ){
                // it verifies the persistence of the connection
                if( $persistent != false){
                    $this->persistent = true;
                }
            }

            public function getConnection(){
                try{
                    $this->con = new PDO($this->dbType.":host=".$this->host.";dbname=".$this->db,
                        $this->user, $this->senha,
                        array( PDO::ATTR_PERSISTENT => $this->persistent ) );
                    // carried through successfully, it returns connected
                    return $this->con;
                // in case that an error occurs, it returns the error
                }catch ( PDOException $ex ){
                    echo "We are currently experiencing technical difficulties. We have a bunch of monkies working really hard to fix the problem. Check back soon: ".$ex->getMessage();
                }
            }

            // close connection
            public function Close(){
                if( $this->con != null )
                    $this->con = null;
            }
        }

    Read the article

  • What *is* an IPM.DistList?

    - by Jeremy
    I'm trying to get the recipient addresses within an IPM.DistList that is stored in a public folder (of type Contacts) in Exchange 2003. The typeName of the object (once I get hold of it) is a Message (with a parent object being a Messages collection) and the messageType is "IPM.DistList". I can find all sorts of things about IPM.DistListItems, which you would think an IPM.DistList would contain, but there apparently isn't any documentation on the DistList (that I can find), and the DistListItems documentation lists no parent possibilities in MSDN.

    I'll state it another way in case I've left you confused: We have an Exchange 2003 info store with Public Folders. Within those Public Folders is a [sub]folder (that holds items of type "Contact") that has a bunch of distribution lists (IPM.DistList's) that have contact entries, members of the list essentially. I need to get the addresses of the members of the lists in the Public Folder sub-folder using any VB language, because the company I work for hired me as a VB guy and expects me to write VB solutions, even though I could do it in C++... alas, I digress. VB is the language I'm supposed to figure this out in. (.NET, script, VBA, VB6, it doesn't matter which one. Yes, I know VB.NET is not really related to those that came before, but they don't know that.)

    Has anyone run into anything like this? Am I just not finding the IPM.DistList documentation, or does it actually exist somewhere? This isn't a Message.MAPIOBJECT (IUnknown) problem, is it?

    Thanks.... Jeremy

    Read the article

  • cython setup.py gives .o instead of .dll

    - by alok1974
    Hi, I am a newbie to Cython, so pardon me if I am missing something obvious here. I am trying to build C extensions to be used in Python for enhanced performance. I have an fc.py module with a bunch of functions and am trying to generate a .dll through Cython using distutils, running on win64:

        c:\python26\python c:\cythontest\setup.py build_ext --inplace

    I have distutils.cfg in C:\Python26\Lib\distutils. As required, distutils.cfg has the following config settings:

        [build]
        compiler = mingw32

    My setup.py looks like this:

        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext

        ext_modules = [Extension('fc', [r'C:\cythonTest\fc.pyx'])]

        setup(
            name = 'FC Extensions',
            cmdclass = {'build_ext': build_ext},
            ext_modules = ext_modules
        )

    I have the latest version of MinGW for target/host amd64 win64 type builds. I have the latest version of Cython for Python 2.6 for win64. Cython does give me an fc.c without errors, only a few warnings for type conversions, which I will handle once I have it right. Further, it produces fc.def and fc.o files instead of giving a .dll. I get no errors. I find in threads that it will create the .so or .dll automatically as required, which is not happening.
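
    As a general note rather than a diagnosis of this particular build: on Windows a compiled Python extension module is normally produced as a .pyd file, not a .dll, so checking only for a .dll can be misleading. A minimal way to see whether an importable module was actually built:

        # Quick check: import the built extension and see where Python loaded it from.
        # On Windows the compiled module is typically fc.pyd rather than fc.dll.
        import fc
        print fc.__file__   # e.g. C:\cythonTest\fc.pyd if the build succeeded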

    Read the article

  • Regex Replacing only whole matches

    - by Leen Balsters
    I am trying to replace a bunch of strings in files. The strings are stored in a DataTable along with the new string value.

        string contents = File.ReadAllText(file);
        foreach (DataRow dr in FolderRenames.Rows)
        {
            contents = Regex.Replace(contents, dr["find"].ToString(), dr["replace"].ToString());
            File.SetAttributes(file, FileAttributes.Normal);
            File.WriteAllText(file, contents);
        }

    The strings look like this: _-uUa, -_uU, _-Ha, etc. The problem that I am having is that, for example, the string "_uU" will also overwrite "_-uUa", so the replacement would look like "newvaluea". Is there a way to tell regex to look at the next character after the found string and make sure it is not an alphanumeric character? I hope it is clear what I am trying to do here. Here is some sample data:

        private function _-0iX(arg1:flash.events.Event):void
        {
            if (arg1.type == flash.events.Event.RESIZE)
            {
                if (this._-2GU)
                {
                    this._-yu(this._-2GU);
                }
            }
            return;
        }

    The next characters could be ;, (, ), dot, comma, space, :, etc.
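
    The usual way to say "the found string must not be followed by another identifier character" is to append a negative lookahead to the escaped search string. A small sketch of that idea using Python's re module with made-up data; the same (?!...) lookahead syntax also works in .NET's Regex.Replace.

        # Replace a token only when the next character is not alphanumeric.
        # Sample input and rename table are made up for illustration.
        import re

        contents = "this._-uU(this._-uUa);"
        renames = {"_-uUa": "newPropA", "_-uU": "newFuncB"}

        for find, replace in renames.items():
            # re.escape protects any regex metacharacters inside the token;
            # (?![A-Za-z0-9]) rejects matches followed by an alphanumeric character.
            pattern = re.escape(find) + r"(?![A-Za-z0-9])"
            contents = re.sub(pattern, replace, contents)

        print contents   # this.newFuncB(this.newPropA);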

    Read the article

  • How do I see if an established socket is stuck on a server that's expecting input?

    - by Parker
    I have a script that scans ports for open proxy servers. The problem is that if it encounters a login program (specifically telnet) then it hangs there forever, since it doesn't know what to do, and eventually the server closes the connection.

    The simple solution would be to create a bunch of cases: if telnet, do this; if SSH, do that; if something else, blah blah blah. I'd like an umbrella solution, since the script is not a high priority for me. The script, as it is now, is available at http://parkrrr.net/socks/scan.phps

    On a small scale (the page maybe averages 15 hits/day) it's fine, but on a larger scale I'd be worried about a lot of open zombie sockets. Swapping the !$strpos doesn't work since servers can return more information than what you requested (headers, ads, etc). Only accepting a fixed number of bytes (as opposed to appending until EOF, which it does now) from the $fgets also does not seem to work. I am sure this is where it gets stuck:

        while (!feof($fp)) {
            $data.=fgets($fp,512);
        }

    But what can I do? Any other suggestions/warnings would also be welcomed.

    Read the article

  • iBatis not populating object when there are no rows found.

    - by Omnipresent
    I am running a stored procedure that returns 2 cursors, and neither of them has any data. I have the following mapping XML:

        <resultMap id="resultMap1" class="HashMap">
            <result property="firstName" columnIndex="2"/>
        </resultMap>

        <resultMap id="resultMap2" class="com.somePackage.MyBean">
            <result property="unitStreetName" column="street_name"/>
        </resultMap>

        <parameterMap id="parmmap" class="map">
            <parameter property="id" jdbcType="String" javaType="java.lang.String" mode="IN"/>
            <parameter property="Result0" jdbcType="ORACLECURSOR" javaType="java.sql.ResultSet" mode="OUT" resultMap="resultMap1"/>
            <parameter property="Result1" jdbcType="ORACLECURSOR" javaType="java.sql.ResultSet" mode="OUT" resultMap="resultMap2"/>
        </parameterMap>

        <procedure id="proc" parameterMap="parmmap">
            { call my_sp (?,?,?) }
        </procedure>

    The first result set is being put in a HashMap; the second result set is being put in a MyBean class. The code in my DAO follows:

        HashMap map = new HashMap();
        map.put("id", "1234");
        getSqlMapClientTemplate().queryForList("mymap.proc", map);
        HashMap result1 = (HashMap)((List)parmMap.get("Result0")).get(0);
        MyBean myObject = (MyBean)((List)parmMap.get("Result1")).get(0); // code fails here

    In the last line above my code fails. It fails because the second cursor has no rows, and that's why nothing is put into the list. However, the first cursor returns nothing as well, but since results are being put into a HashMap, the list for the first cursor at least has a HashMap object inside it.

    Why this difference? Is there a way to make iBatis put an object of MyBean inside the list even if there are no rows returned? Or should I be handling this in my DAO? I want to avoid handling it in the DAO because I have a whole bunch of DAOs like these.

    Read the article

  • What is the correct high level schema.org microdata itemtype for a retail brand/company homepage?

    - by kpowz
    I'd like to hear which schema.org itemtype others would recommend using, or have used, in the case of completing a retail brand's company homepage microdata. Take for example TOMS shoes.

    Example #1 - Using /Corporation as the high-level itemtype, one can include a lot of great /Organization microdata, but nothing about the retail store.

        <html itemscope='itemscope' itemtype="http://schema.org/Website">
        <head></head>
        <body itemscope='itemscope' itemtype="http://schema.org/Corporation">
            various microdata here, probably including Product microdata
        </body>
        </html>

    NOTE: the only schema.org property specific to /Corporation is tickerSymbol, and TOMS doesn't have one.

    Example #2 - This code would work if TOMS started their own chain of physical retail stores and each location had its own homepage. However, for TOMS.com, although schematically accurate and more descriptive on its face, this is incorrect microdata markup, because /ShoeStore derives from /LocalBusiness, which must represent a physical place.

        <html itemscope='itemscope' itemtype='http://schema.org/Website'>
        <head></head>
        <body itemscope='itemscope' itemtype='http://schema.org/ShoeStore'>
            a whole bunch of jabber here
        </body>
        </html>

    NOTE: Since TOMS is virtual and thus can't be a /Store, this means you lose really cool properties like 'currenciesAccepted', 'paymentAccepted' and 'priceRange'.

    Is this just a 'sit and wait' situation until more schemas are approved for 'virtual places', or is there a validation-passing way to get the best of both worlds?

    Read the article

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All,

    Background: I have a customer who has some build scripts for their datacenter, based on Python, that I've inherited. I did not work on the original design, so I'm limited to some degree in what I can and can't change. That said, my customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately other applications also use these values, so I cannot change them to make it easier for me.

    What I want to do is make the scripts more dynamic, to distribute more hosts, so that I don't have to keep updating the scripts in the future and can just add more hosts to the property file. Unfortunately I can't change the current property file and have to work with it. The property file looks something like this:

        projectName.ClusterNameServer1.sslport=443
        projectName.ClusterNameServer1.port=80
        projectName.ClusterNameServer1.host=myHostA
        projectName.ClusterNameServer2.sslport=443
        projectName.ClusterNameServer2.port=80
        projectName.ClusterNameServer2.host=myHostB

    In their deployment scripts they basically have a lot of "if projectName.ClusterNameServerX" checks, where X is some number of entries defined, and then do something, e.g.:

        if projectName.ClusterNameServer1.host != "" do X
        if projectName.ClusterNameServer2.host != "" do X
        if projectName.ClusterNameServer3.host != "" do X

    Then when they add another host (say Server4) they add another if statement.

    Question: What I would like to do is make the scripts more dynamic, parse the properties file, and put what I need into some data structure to pass to the deployment scripts, then just iterate over the structure and do my deployment that way, so I don't have to constantly add a bunch of "if some host# do something" checks. I'm just curious to get some suggestions as to what others would do to parse the file, what sort of data structure they would use, and how they would group things together by ClusterNameServer# or something else. Thanks
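
    Since the build scripts are already Python, one plausible direction (a sketch that assumes the flat key=value format shown above and uses a hypothetical file name) is to fold the properties into a dict keyed by server name and loop over it, so adding Server4 to the file needs no script change:

        # Sketch: parse projectName.ClusterNameServerN.key=value lines into
        # {"ClusterNameServer1": {"host": ..., "port": ..., "sslport": ...}, ...}
        from collections import defaultdict

        def load_servers(path):
            servers = defaultdict(dict)
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith("#") or "=" not in line:
                        continue
                    key, value = line.split("=", 1)
                    _project, server, prop = key.split(".", 2)
                    servers[server][prop] = value
            return servers

        servers = load_servers("datacenter.properties")   # hypothetical file name
        for name, props in sorted(servers.items()):
            if props.get("host"):          # replaces the per-server if statements
                print "deploying to", props["host"], "port", props.get("port")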

    Read the article

  • Where is mpx386.s and start.c in Minix 3.2?

    - by John Bowlinger
    I'm trying to follow along in Operating Systems: Design and Implementation, 3rd edition, and I'm now at the part in the book where Tanenbaum discusses bootup and kernel process switching. He keeps referring to these 2 files (mpx386.s, start.c) that are supposedly in a directory called kernel, but I can't seem to find them.

    In the root directory, when I go to boot/minix/3.2.0/kernel, kernel just seems to be a binary file that is illegible in a terminal. There also seem to be a bunch of mod01-mod12 gz binary files in the 3.2.0 directory as well. Am I in the wrong directory, or is there something I need to install and do to read kernel? I would like to follow along with the book to what's on my screen, instead of constantly flipping back and forth.

    I realize a lot of files are completely different from this book published in 2006, and I accept that, but this seems to be a critical juncture of the book and the operating system as a whole. If it's any consolation, I'm running the OS in VirtualBox on a 64-bit MacBook.

    Read the article

  • Generic that takes only numeric types (int double etc)?

    - by brandon
    In a program I'm working on, I need to write a function to take any numeric type (int, short, long, etc.) and shove it into a byte array at a specific offset. There exists a BitConverter.GetBytes() method that takes the numeric type and returns it as a byte array, and this method only takes numeric types. So far I have:

        private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct
        {
            Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd));
        }

    So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I would want a number that is small enough to be a short to really take up 4 bytes. On the flip side, there are times when I want an int to be crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads.

    If the where clause of a generic supported something like "where T : int || long || etc" I would be OK. (And no need to explain why they don't support that; the reason is fairly obvious.) Any help would be greatly appreciated!

    Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)

    Read the article

  • How to Populate a 'Tree' structure 'Declaratively'

    - by mackenir
    I want to define a 'node' class/struct and then declare a tree of these nodes in code, in such a way that the way the code is formatted reflects the tree structure, and there's not 'too much' boilerplate in the way. Note that this isn't a question about data structures, but rather about what features of C++ I could use to arrive at a similar style of declarative code to the example below. Possibly with C++0x this would be easier, as it has more capabilities in the area of constructing objects and collections, but I'm using Visual Studio 2008.

    Example tree node type:

        struct node
        {
            string name;
            node* children;
            node(const char* name, node* children);
            node(const char* name);
        };

    What I want to do: declare a tree so its structure is reflected in the source code.

        node root = node("foo", [
            node("child1"),
            node("child2", [
                node("grand_child1"),
                node("grand_child2"),
                node("grand_child3")
            ]),
            node("child3")
        ]);

    NB: what I don't want to do: declare a whole bunch of temporary objects/collections and construct the tree 'backwards'.

        node grandkids[] = node[3] {
            node("grand_child1"),
            node("grand_child2"),
            node("grand_child3")
        };
        node kids[] = node[3] {
            node("child1"),
            node("child2", grandkids),
            node("child3")
        };
        node root = node("foo", kids);

    Read the article

  • MVC design for archived data view

    - by Hemant Tank
    Implementation of a standard archive process in ASP.NET MVC, with a SQL Server 2005 backend.

    We have an existing web app built in MVC. We have an entity "Claim" and it has some child entities like ClaimDetails, Files, etc. It's a pretty standard setup in the db: each entity has its own table and they are linked via FKs. Now we need an "Archive" feature in the web app which will allow an admin to archive a Claim and its child entities. An archived Claim should become read-only when visited again.

    Here are some points on which I need your valued opinion:

      - To keep it simple and scalable (for a few million records), for now we plan to simply add a bit field "Archived" to the Claim table in the db and change the behavior accordingly in the web app.
      - We have a 'Manage claim' page which renders a bunch of different views for Claim and its child entities. Now, for a read-only view we can either use the same views or have a separate set of views. What do you suggest?
      - At the controller level, we can identify an archived claim and select which view to render.
      - At the model level, it would be great to be able to use the same model used for Manage Claim, but it might not get us the "text" of some lookup fields. For example, Claim.BrandId is rendered as a dropdown in Manage Claim (requires only BrandId), but for a read-only view we need 'BrandText'.

    Any existing reference or architecture-level example would be great. Here's my previous SO post, but it's more about db-level changes: Design a process to archive data (SQL Server 2005). Thank you.

    Read the article

  • Stored Procedure: Reducing Table Data

    - by SumGuy
    Hi guys,

    A simple question about stored procedures. I have one stored procedure collecting a whole bunch of data in a table. I then call this procedure from within another stored procedure. I can copy the data into a new table created in the calling procedure, but as far as I can see the tables have to be identical. Is this right? Or is there a way to insert only the data I want?

    For example, I have one procedure which returns this:

        SELECT @batch as Batch, @Count as Qty, pd.Location,
               cast(pd.GL as decimal(10,3)) as [Length],
               cast(pd.GW as decimal(10,3)) as Width,
               cast(pd.GT as decimal(10,3)) as Thickness
        FROM propertydata pd
        GROUP BY pd.Location, pd.GL, pd.GW, pd.GT

    I then call this procedure but only want the following data:

        DECLARE @BatchTable TABLE
        (
            Batch varchar(50),
            [Length] decimal(10,3),
            Width decimal(10,3),
            Thickness decimal(10,3)
        )

        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        EXEC dbo.batch_drawings_NEW @batch

    So in the second command I don't want the Qty and Location values. However, the code above keeps returning the error:

        "Insert Error: Column name or number of supplied values does not match table"

    Read the article

  • Compressing a database to a single file?

    - by Assimilater
    Hi all. In my contact manager program I have been storing information by reading and writing comma-delimited files: one for each individual contact, and a separate file for each note. I'm wondering how I could go about shrinking them all into one file effectively.

    I have attempted using the data entry tools in the Visual Studio toolbox and template classes, though I have never quite figured out how to use them. What would be especially convenient is if I could store data as data type IOwner (a class I created) as opposed to strings. I'd also need to figure out how to tell the program what to do when a file is opened (I've noticed in the properties how to associate a file type with the program, though I'm not sure how to tell it what to do when the file is opened).

    Edit: How about rephrasing the question. I have a class IContact with various properties, some of them being lists of other class objects. I have a public list of IContact. Can I write Contacts as a List(Of IContact) to a file, as opposed to a bunch of strings?

    Second part of the question: I have associated .cms files with my program. But if a user opens the file, what code should the program run through in an attempt to deal with the file? This file is going to contain data that the program needs to read. How do I tell it to read a file when the program is opened vicariously because the file was opened? Does this make the question clearer?

    Read the article

  • How to detect whether an EventWaitHandle is waiting?

    - by AngryHacker
    I have a fairly well multi-threaded WinForms app that employs EventWaitHandle in a number of places to synchronize access. So I have code similar to this:

        List<int> _revTypes;
        EventWaitHandle _ewh = new EventWaitHandle(false, EventResetMode.ManualReset);

        void StartBackgroundTask()
        {
            _ewh.Reset();
            Thread t = new Thread(new ThreadStart(LoadStuff));
            t.Start();
        }

        void LoadStuff()
        {
            _revTypes = WebServiceCall.GetRevTypes();
            // ...bunch of other calls fetching data from all over the place
            // using the same pattern
            _ewh.Set();
        }

        List<int> RevTypes
        {
            get
            {
                _ewh.WaitOne();
                return _revTypes;
            }
        }

    Then I just call .RevTypes somewhere from the UI and it will return data to me when LoadStuff has finished executing. All this works perfectly correctly; however, RevTypes is just one property, and there are actually several dozen of these. One or several of these properties are holding up the UI from loading in a fast manner. Short of placing benchmark code into each property, is there a way to see which property is holding the UI from loading? Is there a way to see whether the EventWaitHandle is forced to actually wait?

    Read the article

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following.

    Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. So I want to speed things up by archiving data that is more than 3 years old: saving it into a table called orders-archive, and then purging those rows from the orders table. If we need to research something or a customer wants to pull older information, they still can, but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through older data all the time. These move-and-purge operations can then be cronned on a weekly basis. I already did some tests and I know that I will slash my search times by about 4 times. So far so good, right?

    However, I was thinking about how to implement the older archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive, and god knows how many searches / finds are done throughout the code that I don't want to modify.

    So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.

    Read the article

  • IE8: weird border around HTML button element

    - by s427
    I have a button element with a custom background (image + color) and no borders except for a 2px border-bottom (and a bunch of other properties; code below) which renders quite differently in Firefox and in IE8. The problem is, this is work for a company that uses IE8 as their only browser, so it's important that the button renders well in IE8. Here's a visual comparison between the two:

    My question here is not about the padding difference (I'm looking into that), but about the weird border that is visible in IE8 in addition to the regular border (border-bottom). Can anyone explain to me where it comes from and how to get rid of it? Thanks in advance.

    Here is the HTML code:

        <button class="btn" id="c_edit">
            <span>Annuler</span>
        </button>

    And here is the CSS:

        .btn {
            display: inline-block;
            margin: 0 0 7px 5px;
            padding: 0;
            color: #ddd;
            font-size: 14px;
            font-family: FrutigerLTStd55Roman, sans-serif;
            text-decoration: none;
            border: none;
            border-bottom: 2px solid #222;
            background-color: #999;
            background-image: url('img/btn_bg.gif');
            background-position: 0 bottom;
            background-repeat: repeat-x;
            cursor: pointer;
            transition: all .5s ease-out;
        }

        .btn span {
            display: inline-block;
            margin: 0;
            padding: 8px 10px 6px 40px;
            background-color: transparent;
            background-position: 4px 0;
            background-repeat: no-repeat;
        }

    Read the article

  • Expandable paragraphs with HTML and CSS

    - by user3704175
    I was wondering if anyone here would be so kind as to help me out a bit. I am looking to make expandable paragraphs for my client's website. They would like to keep all of the content from their site, which is pretty massive, and they want a total overhaul of the design. They mainly want to keep the content for SEO purposes.

    Anyhow, I thought it would be helpful for both of us if there is some way to use expandable paragraphs, you know, with a "read more..." link after a certain line of text. I know that there are some jQuery and JavaScript solutions for this, but we would really like to stay away from those options if at all possible. We would like HTML and CSS, if we can. Here is kind of an example:

        HEADING HERE
        Paragraph with a bunch of text. I would like this to appear in a
        pre-determined line. For example, maybe the start of the paragraph goes on
        for, let's say, three lines and then we have the [read more...]

    When the visitor clicks "read more", we would like the rest of the content to just expand to reveal the article in its entirety. I would like for the content to already be on the page, so it just expands. I don't want it to be called in from another file or anything, if that makes sense.

    Thank you in advance for any and all help. It will be greatly appreciated! Testudo

    Read the article

  • Handle BACK key event in child view

    - by Mick Byrne
    In my app, users can tap on image thumbnails to see a full size version. When the thumbnail is tapped, a bunch of new views are created in code (i.e. no XML), appended at the end of the view hierarchy, some scaling and rotating transitions happen, and then the full size, high res version of the image is displayed. Tapping on the full size image reverses the transitions and removes the new views from the view hierarchy.

    I want users to also be able to press the BACK key to reverse the image transitions. However, I can't seem to catch the KeyEvent. This is what I'm trying at the moment:

        // Set a click listener on the image to reverse everything
        frameView.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View arg0) {
                zoomOut(); // This works fine
            }
        });

        // Set the focus onto the frame and then set a key listener to catch the back buttons
        frameView.setFocusable(true);
        frameView.setFocusableInTouchMode(true);
        frameView.requestFocus();
        frameView.setOnKeyListener(new OnKeyListener() {
            @Override
            public boolean onKey(View v, int keyCode, KeyEvent event) {
                // The code never even gets here !!!
                if (keyCode == KeyEvent.KEYCODE_BACK && event.getRepeatCount() == 0) {
                    zoomOut();
                    return true;
                }
                return false;
            }
        });

    Read the article
