Search Results

Search found 657 results on 27 pages for 'ranges'.

Page 20 of 27

  • Problem reintegrating a branch into the trunk in Subversion 1.5

    - by pako
    I'm trying to reintegrate a development branch into the trunk in my Subversion 1.5 repository. I merged all the changes from the trunk to the development branch prior to this operation. Now when I try to reintegrate the changes from the branch I get the following error message:

        Command: Reintegrate merge https://dev/svn/branches/devel into C:\trunk
        Error: Reintegrate can only be used if revisions 280 through 325 were previously
        Error: merged from https://dev/svn/trunk to the reintegrate
        Error: source, but this is not the case:
        Error: branches/devel/images/test
        Error: Missing ranges: /trunk/images/test:280-324
        ...

    The message then goes on complaining about some folders in my project. But when I try to merge the changes from the trunk to the development branch again, TortoiseSVN tells me that there's nothing to merge (as I already merged all the changes before):

        Command: Merging revisions 1-HEAD of https://dev/svn/trunk into C:\devel, respecting ancestry
        Completed: C:\devel

    I'm trying to follow the instructions from here: http://svnbook.red-bean.com/en/1.5/svn.branchmerge.basicmerging.html, but there's nothing about solving such a problem. Any ideas? Perhaps I should just delete the trunk and then make a copy of my branch? But I'm not really sure if it's safe.

    Read the article

  • Composite key syntax in Boost MultiIndex

    - by Sarah
    Even after studying the examples, I'm having trouble figuring out how to extract ranges using a composite key on a MultiIndex container.

        typedef multi_index_container<
            boost::shared_ptr< Host >,
            indexed_by<
                hashed_unique< const_mem_fun<Host,int,&Host::getID> >,             // ID index
                ordered_non_unique< const_mem_fun<Host,int,&Host::getAgeInY> >,    // Age index
                ordered_non_unique< const_mem_fun<Host,int,&Host::getHousehold> >, // Household index
                ordered_non_unique<                                                // Age & eligibility status index
                    composite_key<
                        Host,
                        const_mem_fun<Host,int,&Host::getAgeInY>,
                        const_mem_fun<Host,bool,&Host::isPaired>
                    >
                >
            > // end indexed_by
        > HostContainer;

    My goal is to get an iterator pointing to the first of the subset of elements in HostContainer hmap that has age partnerAge and returns false from Host::isPaired():

        std::pair< hmap::iterator, hmap::iterator > pit =
            hmap.equal_range( boost::make_tuple( partnerAge, false ) );

    I think this is very wrong. How/where do I specify the iterator index (which should be 3 for age & eligibility)? I will include other composite keys in the future. What exactly are the two iterators in the std::pair? (I'm copying syntax from an example that I don't understand.) I would ideally use std::count to calculate the number of elements of age partnerAge that are eligible (i.e., return false from Host::isPaired()). What is the syntax for extracting the sorted index that meets these requirements? I'm obviously still learning C++ syntax. Thanks in advance for any help.

    Read the article

  • BITS client fails to specify HTTP Range header

    - by user256890
    Our system is designed to deploy to regions with unreliable and/or insufficient network connections, so we build our own fault-tolerant data replication services that use BITS. Due to some security and maintenance requirements, we implemented our own ASP.NET file download service on the server side, instead of just letting IIS serve up the files. When the BITS client makes an HTTP download request with a specified range of the file, our ASP.NET page pulls the demanded file segment into memory and serves that up as the HTTP response. That is the theory. ;)

    This theory fails only in artificial lab scenarios so far, but I would not let the system deploy in real-life scenarios unless we can overcome that. Lab scenario: I have the BITS client and IIS on the same developer machine, so practically I have enormous network "bandwidth", and BITS is intelligent enough to detect that. As the BITS client discovers the unlimited bandwidth, it gets more and more "greedy". With each HTTP request, BITS wants to grab greater and greater file ranges (we are talking about downloading CD ISO files, videos), demanding 20-40MB inside a single HTTP request - a size that I am not comfortable pulling into memory on the server side in one go. I can overcome that simply by giving less than demanded, which is OK.

    However, BITS then gets really "confident" and "arrogant", demanding files WITHOUT specifying the download range, i.e., it wants the entire file in a single request, and this is where things go wrong. I do not know how to answer that request in the case of a 600MB file. If I just provide the starting 1MB range of the file, the BITS client keeps sending HTTP requests for the same file without a download range to continue; it hammers its point that it wants the entire file in one go. Since I am reluctant to provide the entire file, BITS gives up after several trials and reports an error. Any thoughts?
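
    A minimal sketch of the pattern under discussion, in Python rather than the actual ASP.NET service (file name, port, and chunk size are made up): honor a Range header when one is present, and otherwise stream the whole body in small pieces so that no 600MB buffer is ever held in memory. Suffix ranges (bytes=-500) are left out for brevity.

        # Hypothetical standalone sketch, not the ASP.NET service from the question.
        import os
        from http.server import BaseHTTPRequestHandler, HTTPServer

        FILE_PATH = "big.iso"   # made-up file name
        CHUNK = 64 * 1024       # stream in 64 KB pieces

        class RangeHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                size = os.path.getsize(FILE_PATH)
                start, end = 0, size - 1
                rng = self.headers.get("Range")
                if rng and rng.startswith("bytes="):
                    lo, _, hi = rng[len("bytes="):].partition("-")
                    start = int(lo or 0)
                    end = int(hi) if hi else size - 1
                    self.send_response(206)
                    self.send_header("Content-Range", "bytes %d-%d/%d" % (start, end, size))
                else:
                    self.send_response(200)  # no Range header: stream the whole file
                length = end - start + 1
                self.send_header("Content-Length", str(length))
                self.send_header("Accept-Ranges", "bytes")
                self.end_headers()
                with open(FILE_PATH, "rb") as f:  # at most CHUNK bytes in memory at once
                    f.seek(start)
                    while length > 0:
                        piece = f.read(min(CHUNK, length))
                        if not piece:
                            break
                        self.wfile.write(piece)
                        length -= len(piece)

        HTTPServer(("", 8080), RangeHandler).serve_forever()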

    Read the article

  • how to handle an asymptote/discontinuity with Matplotlib

    - by Geddes
    Hello all. Firstly - thanks again for all your help. Sorry not to have accepted the responses to my previous questions as I did not know how the system worked (thanks to Mark for pointing that out!). I have since been back and gratefully acknowledged the kind help I have received. My question: when plotting a graph with a discontinuity/asymptote/singularity/whatever, is there any automatic way to prevent Matplotlib from 'joining the dots' across the 'break'? (please see the code below). I read that Sage has a [detect_poles] facility that looked good, but I really want it to work with Matplotlib. Thanks and best wishes, Geddes

        import matplotlib.pyplot as plt
        import numpy as np
        from sympy import sympify, lambdify
        from sympy.abc import x

        fig = plt.figure(1)
        ax = fig.add_subplot(111)

        # set up axis
        ax.spines['left'].set_position('zero')
        ax.spines['right'].set_color('none')
        ax.spines['bottom'].set_position('zero')
        ax.spines['top'].set_color('none')
        ax.xaxis.set_ticks_position('bottom')
        ax.yaxis.set_ticks_position('left')

        # setup x and y ranges and precision
        xx = np.arange(-0.5, 5.5, 0.01)

        # draw my curve
        myfunction = sympify(1/(x-2))
        mylambdifiedfunction = lambdify(x, myfunction, 'numpy')
        ax.plot(xx, mylambdifiedfunction(xx), zorder=100, linewidth=3, color='red')

        # set bounds
        ax.set_xbound(-1, 6)
        ax.set_ybound(-4, 4)

        plt.show()
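
    One common workaround is to mask samples near the pole with NaN, which matplotlib will not draw across. A minimal self-contained sketch, assuming any |y| above an arbitrary threshold counts as "at the asymptote":

        import numpy as np
        import matplotlib.pyplot as plt

        xx = np.arange(-0.5, 5.5, 0.01)
        with np.errstate(divide='ignore'):
            yy = 1.0 / (xx - 2)
        yy[np.abs(yy) > 50] = np.nan  # threshold is arbitrary; tune to the visible y-range
        plt.plot(xx, yy, linewidth=3, color='red')  # the line now breaks at x = 2
        plt.ylim(-4, 4)
        plt.show()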

    Read the article

  • Calculating odds distribution with 6-sided dice

    - by Stephen
    I'm trying to calculate the odds distribution for a varying number of 6-sided die rolls. For example, 3d6 ranges from 3 to 18 as follows:

        3:1, 4:3, 5:6, 6:10, 7:15, 8:21, 9:25, 10:27, 11:27, 12:25, 13:21, 14:15, 15:10, 16:6, 17:3, 18:1

    I wrote this PHP program to calculate it:

        function distributionCalc($numberDice, $sides = 6)
        {
            $distribution = array();
            for ($i = 0; $i < pow($sides, $numberDice); $i++) {
                $sum = 0;
                for ($j = 0; $j < $numberDice; $j++) {
                    $sum += 1 + (floor($i / pow($sides, $j)) % $sides);
                }
                $distribution[$sum]++;
            }
            return $distribution;
        }

    The inner $j for-loop uses the magic of the floor and modulus functions to create a base-6 counting sequence with the number of digits being the number of dice, so 3d6 would count as: 111, 112, 113, 114, 115, 116, 121, 122, 123, 124, 125, 126, 131, etc. The function takes the sum of each, so it would read as: 3, 4, 5, 6, 7, 8, 4, 5, 6, 7, 8, 9, 5, etc. It plows through all 6^3 possible results and adds 1 to the corresponding slot in the $distribution array between 3 and 18. Pretty straightforward. However, it only works up to about 8d6; after that I get server time-outs because it's now doing billions of calculations. But I don't think that's necessary, because die probability follows a sweet bell-curve distribution. I'm wondering if there's a way to skip the number crunching and go straight to the curve itself. Is there a way to do this, so that it works, for example, with 80d6 (sums 80-480)? Can the distribution be projected without doing 6^80 calculations? I'm not a professional coder and probability is still new to me, so thanks for all the help! Stephen
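
    There is indeed a shortcut: the distribution for n dice is the n-fold convolution of the single-die distribution, which takes on the order of n^2 * sides additions instead of sides^n enumerations. A sketch in Python (the loop structure ports directly to PHP):

        def dice_distribution(num_dice, sides=6):
            dist = {0: 1}  # ways to reach each sum using 0 dice so far
            for _ in range(num_dice):
                new_dist = {}
                for total, ways in dist.items():
                    for face in range(1, sides + 1):
                        new_dist[total + face] = new_dist.get(total + face, 0) + ways
                dist = new_dist
            return dist

        print(dice_distribution(3))
        # {3: 1, 4: 3, 5: 6, 6: 10, ..., 17: 3, 18: 1}
        # dice_distribution(80) is instantaneous, and the counts stay exact
        # because Python integers have arbitrary precision.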

    Read the article

  • Sybase: how can I remove non-printable characters from CHAR or VARCHAR fields with SQL?

    - by Kenny Drobnack
    I'm working with a Sybase database that seems to have non-printable characters in some of the string fields, and this is throwing off some of our processing code. At first glance it seemed to be only newlines and carriage returns, but we also have ASCII code 27 - an ESC character - plus some accented characters and other oddities. I have no direct access to change the database, so changing the bad data isn't an option yet. For now I have to make do with just filtering it out. We're trying to export the table data from one database and load it into a database used by another application in a nightly batch process. Ideally, I'd like a function to which I can pass a list of characters and just have Sybase return the data with those characters removed. I'd like to keep it something we could do in plain SQL if possible. Something like this, to remove characters that are ASCII 0-31:

        select str_replace(FIELD1, (0-31), NULL) as FIELD1,
               str_replace(FIELD2, (0-31), NULL) as FIELD2
        from TABLE

    So far, str_replace is the nearest thing I can find, but it only allows replacing one string with another - no support for character ranges, so it won't let me do the above. We're running Sybase ASE 12.5 on Unix servers.
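
    If the nightly batch passes rows through a script rather than pure SQL, the filtering could also happen in the export step. A sketch in Python, assuming the 0-31 control range from the question:

        import re

        CONTROL_CHARS = re.compile(r'[\x00-\x1f]')  # ASCII 0-31, including ESC (27)

        def clean(value):
            return CONTROL_CHARS.sub('', value) if isinstance(value, str) else value

        print(repr(clean("bad\x1bvalue\r\n")))  # 'badvalue'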

    Read the article

  • Daylight Savings Handling in DateDiff() in MS Access?

    - by PowerUser
    I am fully aware of DateDiff()'s inability to handle daylight savings issues. Since I often use it to compare the number of hours or days between two datetimes several months apart, I need to write up a solution to handle DST. What I came up with is a function that first subtracts 60 minutes from a datetime value if it falls within the date ranges specified in a local table (LU_DST). Thus, the usage would be:

        DateDiff("n", Conv_DST_to_Local([date1]), Conv_DST_to_Local([date2]))

    My question is: is there a better way to handle this? I'm going to make a wild guess that I'm not the first person with this question. This seems like the kind of thing that should have been added to one of the core reference libraries. Is there a way for me to access my system clock to ask it whether DST was in effect at a certain date and time?

        Function Conv_DST_to_Local(X As Date) As Date
            Dim rst As DAO.Recordset
            Set rst = CurrentDb.OpenRecordset("LU_DST")
            Conv_DST_to_Local = X
            While rst.EOF = False
                If X > rst.Fields(0) And X < rst.Fields(1) Then Conv_DST_to_Local = DateAdd("n", -60, X)
                rst.MoveNext
            Wend
        End Function

    Notes: I have visited http://www.cpearson.com/excel/TimeZoneAndDaylightTime.aspx and imported its BAS file. I have spent at least an hour by now reading through it and, while it may do its job well, I can't figure out how to modify it to my needs. But if you have an answer using his data structures, I'll take a look. Timezones are not an issue, since this is all local time.
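
    On the "ask the system clock" point: C runtimes do expose this flag, illustrated here in Python; on the VBA side the analogous information would come from the Win32 time-zone APIs (e.g. GetTimeZoneInformation), which is an assumption worth verifying for historical dates.

        import time
        from datetime import datetime

        def dst_in_effect(dt):
            # tm_isdst is 1 when the C runtime says DST applied at that local instant
            return time.localtime(dt.timestamp()).tm_isdst > 0

        print(dst_in_effect(datetime(2010, 1, 15, 12, 0)))  # False in US zones
        print(dst_in_effect(datetime(2010, 7, 15, 12, 0)))  # True in US zones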

    Read the article

  • How to implement Excel Solver functionality in C#?

    - by Vic
    Hi, I have an application in C# in which I need to do some optimization calculations, like the Excel Solver add-in does. One option is certainly to write my own solver implementation, but I'm kind of short on time, so I'm looking into libraries that already exist that can help me with this. I've been trying the Microsoft Solver Foundation, which seems pretty neat and cool, but the problem is that it doesn't seem to work with the kind of calculations that I need to do. At the end of this question I'm adding the information about the calculations I need to perform and optimize. So basically my question is whether any of you know of any other library that I can use for this purpose, or any tutorial that can help me write my own solver, or any idea that gives me a lead to solve this issue. Thanks.

    Additional info - this is the data I need to calculate: I have 7 variables; let's call them var1, var2, ..., var7. The constraints for these variables are:

    1. All of them need to satisfy 0 <= varn <= 0.5 (where n is the number of the variable).
    2. The sum of all the variables should be equal to 1.

    The objective is to maximize the target formula, which in Excel looks like this:

        (MMULT(TRANSPOSE(L26:L32),M14:M20)) / (SQRT(MMULT(MMULT(TRANSPOSE(L26:L32),M4:S10),L26:L32)))

    The range that you see in this formula, L26:L32, is actually the range with the variables from above, var1, var2, ..., var7. M14:M20 and M4:S10 are ranges with data that I get from different sources; they are most likely decimal values. As I said before, I was using Microsoft Solver Foundation. I modeled pretty much everything with it, and created functions that handle the operations of the target formula, but when I try to solve the model it always fails; I think it is because of the complexity of the operations. In any case, I just wanted to show this data so you can have an idea about the kind of calculations I need to implement.
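
    The objective is a ratio of a linear form to a quadratic form (it has the shape of a Sharpe ratio), which general nonlinear solvers handle directly. For reference, a sketch of the same problem in Python with SciPy; r and sigma are random stand-ins for the M14:M20 vector and the M4:S10 matrix:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        r = rng.random(7)        # stand-in for M14:M20
        a = rng.random((7, 7))
        sigma = a @ a.T          # positive semidefinite stand-in for M4:S10

        def neg_objective(w):
            # negate because minimize() minimizes and we want to maximize
            return -(w @ r) / np.sqrt(w @ sigma @ w)

        result = minimize(
            neg_objective,
            x0=np.full(7, 1 / 7),          # start from equal weights
            bounds=[(0, 0.5)] * 7,         # 0 <= varn <= 0.5
            constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],  # sum to 1
        )
        print(result.x)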

    Read the article

  • Sorting by dates (including nil) with NSFetchedResultsController

    - by glorifiedHacker
    In my NSFetchedResultsController, I set a sortDescriptor that sorts based on the date property of my managed objects. The problem that I have encountered (along with several others according to Google) is that nil values are sorted at the earliest end rather than the latest end of the date spectrum. I want my list to be sorted earliest, earlier, now, later, latest, nil. As I understand it, this sorting is done at the database level in SQLite and so I cannot construct my own compare: method to provide the sorting I want. I don't want to manually sort in memory, because I would have to give up all of the benefits of NSFetchedResultsController. I can't do compound sorting because the sectionNameKeyPaths are tightly coupled to the date ranges. I could write a routine that redirects indexPath requests so that section 0 in the results controller gets mapped to the last section of the tableView, but I fear that would add a lot of overhead, severely increase the complexity of my code, and be very, very error-prone. The latest idea that I am considering is to map all nil dates to the furthest future date that NSDate supports. My left brain hates this idea, as it feels more like a hack. It will also take a bit of work to implement, since checking for nil factors heavily into how I process dates in my app. I don't want to go this route without first checking for better options. Can anyone think of a better way to get around this problem?

    Read the article

  • Inserting an image into SQL Server gives an "operand type clash"

    - by Termedi
    I'm trying to save an image in a SQL Server 2000 database. The data type of the column is image. Here is the code:

    Image upload:

        <?php
        include('config.php');

        if (is_uploaded_file($_FILES['userfile']['tmp_name'])) {
            $fileName = $_FILES['userfile']['name'];
            $tmpName  = $_FILES['userfile']['tmp_name'];
            $fileSize = $_FILES['userfile']['size'];
            $fileType = $_FILES['userfile']['type'];
            $size = filesize($tmpName);
            set_magic_quotes_runtime(0); // deactivate PHP's default escaping of special characters in external files
            $img_binaire = base64_encode(fread(fopen(str_replace("'", "''", $tmpName), "r"), $size));
            $query = "INSERT INTO test_image (image_name, image_content, image_size) ".
                     "VALUES ('{$fileName}', '{$img_binaire}', '{$size}')";
            odbc_exec($conn, $query) or die('Error, query failed');
            echo "<br>File $fileName uploaded<br>";
            echo "<br>File Size: $fileSize <br>";
        }
        ?>

    Image show:

        <?php
        include('config.php');

        $sql = "select * from test_image where id = 2";
        $rsl = odbc_exec($conn, $sql);
        $image_info = odbc_fetch_array($rsl);
        //$count = sizeof($image_info['image_content']);
        //header('Accept-Ranges: bytes');
        //header('Content-Length: '.$image_info['image_size']);
        //header("Content-length: 17397");
        header('Content-Type: image/jpeg');
        echo base64_decode($image_info['image_content']);
        //echo bindec($image_info['image_content']);
        ?>

    It gives the following error:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][ODBC SQL Server Driver][SQL Server]Operand type clash: text is incompatible with image, SQL state 22005 in SQLExecDirect in C:\xampp\htdocs\test\upload.php on line 25
        Error, query failed

    What do I need to do differently?

    Read the article

  • Flex special characters not embedding

    - by Hanpan
    Hi, I am using the following code to embed Arial into my application:

        [Embed(source='../assets/fonts/Arial.ttf', fontFamily='CustomFont', fontWeight='regular',
            unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC',
            mimeType='application/x-font-truetype')]
        public static var MY_FONT:Class;

        [Embed(source='../assets/fonts/Arial Bold.ttf', fontFamily='CustomFont', fontWeight='bold',
            unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC',
            mimeType='application/x-font-truetype')]
        public static var MY_FONT_BOLD:Class;

        [Embed(source='../assets/fonts/Arial Italic.ttf', fontFamily='CustomFont', fontWeight='regular', fontStyle="italic",
            unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC',
            mimeType='application/x-font-truetype')]
        public static var MY_FONT_ITALIC:Class;

        [Embed(source='../assets/fonts/Arial Bold Italic.ttf', fontFamily='CustomFont', fontWeight='bold', fontStyle="italic",
            unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC',
            mimeType='application/x-font-truetype')]
        public static var MY_FONT_ITALIC_BOLD:Class;

        [Embed(source='../assets/fonts/Arial Unicode.ttf', fontFamily='CustomFont', fontWeight='regular',
            unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC',
            mimeType='application/x-font-truetype')]
        public static var MY_FONT_UNICODE:Class;

    It's working fine for foreign characters, but no special characters (copyright, trademark, euro sign etc) are working. Can anyone help? I've checked my unicode ranges, they should work fine!

    Read the article

  • Having trouble animating a line in D3.js using an array of objects as data

    - by user1731245
    I can't seem to get an animated transition between line graphs when I pass in a new set of data. I am using an array of objects as data, like this:

        [{
            clicks: 40,
            installs: 10,
            time: "1349474400000"
        }, {
            clicks: 61,
            installs: 3,
            time: "1349478000000"
        }];

    I am using this code to set up my ranges / axes:

        var xRange = d3.time.scale().range([0, w]),
            yRange = d3.scale.linear().range([h, 0]),
            xAxis = d3.svg.axis().scale(xRange).tickSize(-h).ticks(6).tickSubdivide(false),
            yAxis = d3.svg.axis().scale(yRange).ticks(5).tickSize(-w).orient("left");

        var clicksLine = d3.svg.line()
            .interpolate("cardinal")
            .x(function(d) { return xRange(d.time); })
            .y(function(d) { return yRange(d.clicks); });

        var clickPath;

        function drawGraphs(data) {
            clickPath = svg.append("g")
                .append("path")
                .data([data])
                .attr("class", "clicks")
                .attr("d", clicksLine);
        }

        function updateGraphs(data) {
            svg.select('path.clicks')
                .data([data])
                .attr("d", clicksLine)
                .transition()
                .duration(500)
                .ease("linear");
        }

    I have tried just about everything to be able to pass in new data and see an animation between graphs. Not sure what I am missing? Does it have something to do with using an array of objects instead of just a flat array of numbers as data?

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using a table-valued function as a convenient way to perform arbitrary aggregation on subset data from large tables (passing a date range or similar parameters). I'm using these inside larger queries as joined computations, and I'm wondering whether the query plan optimizer works well with them in every condition, or whether I'd be better off unnesting such computations in my larger queries.

    1. Does the query plan optimizer unnest table-valued functions if it makes sense?
    2. If it doesn't, what do you recommend to avoid the code duplication that manually unnesting them would cause?
    3. If it does, how do you identify that from the execution plan?

    Code sample:

        create table dbo.customers
        (
            [key] uniqueidentifier
            , constraint pk_dbo_customers primary key ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.point_of_sales
        (
            [key] uniqueidentifier
            , customer_key uniqueidentifier
            , constraint pk_dbo_point_of_sales primary key ([key])
        )
        go

        create table dbo.product_ranges
        (
            [key] uniqueidentifier
            , constraint pk_dbo_product_ranges primary key ([key])
        )
        go

        create table dbo.products
        (
            [key] uniqueidentifier
            , product_range_key uniqueidentifier
            , release_date datetime
            , constraint pk_dbo_products primary key ([key])
            , constraint fk_dbo_products_product_range_key
                foreign key (product_range_key) references dbo.product_ranges ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.sales_history
        (
            [key] uniqueidentifier
            , product_key uniqueidentifier
            , point_of_sale_key uniqueidentifier
            , accounting_date datetime
            , amount money
            , quantity int
            , constraint pk_dbo_sales_history primary key ([key])
            , constraint fk_dbo_sales_history_product_key
                foreign key (product_key) references dbo.products ([key])
            , constraint fk_dbo_sales_history_point_of_sale_key
                foreign key (point_of_sale_key) references dbo.point_of_sales ([key])
        )
        go

        create function dbo.f_sales_history_..snip.._date_range
        (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns table
        as
        return
        (
            select pos.customer_key
                 , sh.product_key
                 , sum(sh.amount) amount
                 , sum(sh.quantity) quantity
            from dbo.point_of_sales pos
            inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key]
            where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
            group by pos.customer_key
                   , sh.product_key
        )
        go

        -- TODO: insert some data

        -- this is a table containing a selection of product ranges
        declare @selectedproductranges table([key] uniqueidentifier)

        -- this is a table containing a selection of customers
        declare @selectedcustomers table([key] uniqueidentifier)

        declare @low datetime
              , @up datetime

        -- TODO: set top query parameters

        select saleshistory.customer_key
             , saleshistory.product_key
             , saleshistory.amount
             , saleshistory.quantity
        from dbo.products p
        inner join @selectedproductranges productrangeselection
            on p.product_range_key = productrangeselection.[key]
        inner join @selectedcustomers customerselection on 1 = 1
        inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory
            on saleshistory.product_key = p.[key]
            and saleshistory.customer_key = customerselection.[key]

    I hope the sample makes sense. Many thanks for your help!

    Read the article

  • How do I create JavaScript escape sequences in PHP?

    - by ordinarytoucan
    I'm looking for a way to create valid UTF-16 JavaScript escape sequences (including surrogate pairs) from within PHP. I'm using the code below to get the UTF-32 code points (from a UTF-8 encoded character). This works for JavaScript escape characters (e.g. '\u00E1' for 'á') - until you get into the upper ranges, where you need surrogate pairs (e.g. '𝜕' comes out as '\u1D715' but should be '\uD835\uDF15')...

        function toOrdinal($chr)
        {
            if (ord($chr{0}) >= 0 && ord($chr{0}) <= 127) {
                return ord($chr{0});
            } elseif (ord($chr{0}) >= 192 && ord($chr{0}) <= 223) {
                return (ord($chr{0}) - 192) * 64 + (ord($chr{1}) - 128);
            } elseif (ord($chr{0}) >= 224 && ord($chr{0}) <= 239) {
                return (ord($chr{0}) - 224) * 4096 + (ord($chr{1}) - 128) * 64 + (ord($chr{2}) - 128);
            } elseif (ord($chr{0}) >= 240 && ord($chr{0}) <= 247) {
                return (ord($chr{0}) - 240) * 262144 + (ord($chr{1}) - 128) * 4096 + (ord($chr{2}) - 128) * 64 + (ord($chr{3}) - 128);
            } elseif (ord($chr{0}) >= 248 && ord($chr{0}) <= 251) {
                return (ord($chr{0}) - 248) * 16777216 + (ord($chr{1}) - 128) * 262144 + (ord($chr{2}) - 128) * 4096 + (ord($chr{3}) - 128) * 64 + (ord($chr{4}) - 128);
            } elseif (ord($chr{0}) >= 252 && ord($chr{0}) <= 253) {
                return (ord($chr{0}) - 252) * 1073741824 + (ord($chr{1}) - 128) * 16777216 + (ord($chr{2}) - 128) * 262144 + (ord($chr{3}) - 128) * 4096 + (ord($chr{4}) - 128) * 64 + (ord($chr{5}) - 128);
            }
        }

    How do I adapt this code to give me proper UTF-16 code units? Thanks!
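
    The missing step is the surrogate-pair arithmetic for code points above U+FFFF. Sketched in Python, because it is pure integer math that ports directly to PHP:

        def js_escape(codepoint):
            """UTF-16 escape sequence(s) for one Unicode code point."""
            if codepoint <= 0xFFFF:
                return "\\u%04X" % codepoint
            v = codepoint - 0x10000       # 20 bits, split across the pair
            high = 0xD800 + (v >> 10)     # top 10 bits -> high surrogate
            low = 0xDC00 + (v & 0x3FF)    # bottom 10 bits -> low surrogate
            return "\\u%04X\\u%04X" % (high, low)

        print(js_escape(0x00E1))   # \u00E1  (á)
        print(js_escape(0x1D715))  # \uD835\uDF15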

    Read the article

  • Setting Connection Parameters via ADO for MSSQL

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries.

    We have some old Access reports that use a handful of VBScript functions in their SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between the date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL.

    I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

        Connection->Execute("SET @GetStartDate = ...");
        Connection->Execute("SET @GetEndDate = ...");
        // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

        // Command has been previously been created
        ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
        ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
        // Set values and attach etc...

    What I would like to know is if there is something like:

        Connection->SetParameter("GetStartDate", "20090101");
        Connection->SetParameter("GetEndDate", "20100101");

    ...where these persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.

    Read the article

  • IIS6 compressing static files, but not JSON requests

    - by user500038
    I have IIS6 compression set up, and static content is being compressed correctly as per Coding Horror. Example of a static file:

    Response headers:

        Content-Length: 55513
        Content-Type: application/x-javascript
        Content-Encoding: gzip
        Last-Modified: Mon, 20 Dec 2010 15:31:58 GMT
        Accept-Ranges: bytes
        Vary: Accept-Encoding
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Wed, 29 Dec 2010 16:37:23 GMT

    Request headers:

        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

    As you can see, the response is correctly compressed. However, with a JSON call you'll see the request with the gzip parameter correctly set:

        Accept: application/json, text/javascript, */*; q=0.01
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        X-Requested-With: XMLHttpRequest

    Then for the response:

        Cache-Control: private
        Content-Length: 6811
        Content-Type: application/json; charset=utf-8
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        X-AspNetMvc-Version: 2.0

    No gzip! I found this article by Rick Strahl, which outlines how to compress in your code, but I'd like IIS to handle this. Is this possible with IIS6?

    Read the article

  • UISlider how to set the initial value

    - by slim
    I'm pretty new at this iPhone dev stuff and I'm working on some existing code at a company. The UISlider I'm trying to set the initial value on is actually in a UITableViewCell and is a custom control. I was thinking that in the cell init:

        cell = (QuantitiesCell *)[self loadCellWithNibName:@"QuantitiesCell"
                                              forTableView:ltableView
                                                 withStyle:UITableViewCellStyleDefault];

    I could just call something like:

        ((QuantitiesCell *)cell).value = 5;

    The actual QuantitiesCell class has the member value and the following functions:

        - (void)initCell; {
            if (listOfValues == nil) {
                [self initListOfValues];
            }
            quantitiesSLider.maximumValue = [listOfValues count] - 1;
            quantitiesSLider.minimumValue = 0;
            quantitiesSLider.value = self.value;
        }

        - (void)initListOfValues; {
            listOfValues = [[NSMutableArray alloc] initWithCapacity:10];
            int j = 0;
            for (float i = minValue; i <= maxValue + increment; i = i + increment) {
                [listOfValues addObject:[NSNumber numberWithFloat:i]];
                if (i == value) {
                    quantitiesSLider.value = j;
                }
                j++;
            }
        }

    Like I said, I'm pretty new at this, so if I need to post more of the code to show what's going on, let me know. This slider always defaults to 1; the slider usually ranges from 1-10, and I want to set it to a specific value of the item I'm trying to edit.

    Read the article

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code by checking if the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column). This is really a compression problem. I would like to do this for fun. OK, now that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of (a sketch of the range idea follows below):

    1. Break the zipcode into 2 parts, the first 2 digits and the last 3 digits. Make a giant if-else statement first checking the first 2 digits, then checking ranges within the last 3 digits.
    2. Or, convert the zips into hex, and see if I can do the same thing using smaller groups.
    3. Find out whether, within the range of all valid zip codes, there are more valid zip codes than invalid zip codes, and write the above code targeting the smaller group.
    4. Break up the hash into separate files, and load them via Ajax as the user types in the zipcode. So perhaps break it into 2 parts, the first for the first 2 digits, the second for the last 3.

    Lastly, I plan to generate the JavaScript files using another program, not by hand.

    Edit: performance matters here. I do want to use this, if it doesn't suck. Performance of the JavaScript code execution + download time.

    Edit 2: JavaScript only solutions please. I don't have access to the application server, plus, that would make this into a whole other problem =)
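
    The promised sketch of the range idea, in Python for brevity even though the question wants JavaScript (the toy ranges below are made up, not taken from zips.txt): store the valid zips as sorted (start, end) pairs and binary-search them.

        import bisect

        ranges = [(501, 544), (601, 799), (1001, 1099)]  # made-up data
        starts = [r[0] for r in ranges]

        def is_valid(zipcode):
            i = bisect.bisect_right(starts, zipcode) - 1  # last range starting <= zipcode
            return i >= 0 and starts[i] <= zipcode <= ranges[i][1]

        print(is_valid(544), is_valid(545))  # True False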

    Read the article

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However in Windows Server 2003 and earlier (including XP) Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end. Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.
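
    For reference, ephemeral assignment is easy to watch: bind to port 0 and ask which port the OS picked (Python here for brevity; the Winsock call sequence is the same). On XP-era Windows the answer lands in 1025-5000.

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("0.0.0.0", 0))     # port 0 = "pick an ephemeral port for me"
        print(s.getsockname()[1])  # the port the OS chose
        s.close()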

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array that stores the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside would be that this would mean that only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space....
    5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array, so that different threads will use a different area of the array independently. This will obviously require some code to manage the offsets and allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day).
    2. The data is parsed.
    3. Part of the data is processed in a way that compares it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller in size (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, and perhaps adds a few more records.
    4. Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set which can be used to perform calculations, without touching the pre-existing data in the database.

    I should point out that this data can be lost; it's not irreplaceable (the key information can be read from the database if needed), but keeping it would speed up the process next time. Application components can (and will be) run off different computers in the same network, so the storage has to be reachable from multiple hosts.

    I have considered using memcached, but I'm not quite sure I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess that it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if the data were 5x that amount? What if it were to consume 1-2 GB of cache only to keep data between iterations (which could easily happen)?

    So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure if they can persist between sessions and be used by other hosts in the network... Any other suggestions? Something I should consider?

    Read the article

  • Querying datetime.datetime on App Engine acts differently than on the dev server - help!

    - by Alon Carmel
    Hey, I'm having some trouble with stuff that works locally but doesn't work in the App Engine Python environment. Basically, I want to get a program from an EPG between ranges of date and time. I know I cannot do two "where <" inequality filters, so I saw a suggestion to save the dates as a list of datetime.datetime values, which I did:

        [datetime.datetime(2010, 5, 10, 14, 25), datetime.datetime(2010, 5, 10, 15, 0)]

    This is OK, but when I try to compare to it:

        progranon = get_object(Programs2Channel,
                               'channel_id =', channelobj.key(),
                               'endstartdate >', programstart_minex,
                               'endstartdate <', programstart_minex)

    this for some reason works locally but fails to retrieve the data on App Engine. (I'm using the Google App Engine Django patch, which uses get_object to retrieve data in transactions.) Please help. Here are more details:

        # this is the list:
        [datetime.datetime(2010, 5, 13, 10, 45), datetime.datetime(2010, 5, 13, 11, 30)]

        # this is the query:
        programstart = "" + year + "-" + month + "-" + day + " " + hour + ":" + minute
        programstart_minex = datetime.strptime(programstart, "%Y-%m-%d %H:%M")
        progranon = Programs2Channel.gql(
            'WHERE channel_id = :channelid AND endstartdate > :programstartx AND endstartdate < :programstartx',
            channelid=channelobj.key(),
            programstartx=programstart_minex).get()

    Read the article

  • Most efficient way to update attribute of one instance

    - by Begbie00
    Hi all - I'm creating an arbitrary number of instances (using for loops and ranges). At some event in the future, I need to change an attribute for only one of the instances. What's the best way to do this? Right now, I'm doing the following:

    1. Manage the instances in a list.
    2. Iterate through the list to find a key value.
    3. Once I find the right object within the list (i.e. key value = value I'm looking for), change whatever attribute I need to change.

        for Instance in ListofInstances:
            if Instance.KeyValue == SearchValue:
                Instance.AttributeToChange = 10

    This feels really inefficient: I'm basically iterating over the entire list of instances, even though I only need to change an attribute in one of them. Should I be storing the Instance references in a structure more suitable for random access (e.g. a dictionary with KeyValue as the dictionary key)? Is a dictionary any more efficient in this case? Should I be using something else? Thanks, Mike
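
    A minimal sketch of the dictionary approach, assuming KeyValue is unique, hashable, and stable once an instance is indexed: build the index once, and each update becomes an O(1) lookup instead of an O(n) scan.

        # Hypothetical stand-in for the asker's class.
        class Instance:
            def __init__(self, key_value):
                self.KeyValue = key_value
                self.AttributeToChange = 0

        ListofInstances = [Instance(k) for k in range(1000)]

        # Build the lookup table once, after creating the instances.
        instances_by_key = {inst.KeyValue: inst for inst in ListofInstances}

        instances_by_key[42].AttributeToChange = 10  # direct access, no iteration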

    Read the article
