Search Results


  • DB design for master file in enterprise software

    - by Thang Nguyen
    Dear all. I want to write an enterprise application and I'm now in the DB design phase. The software will have master data such as Suppliers, Customers, Inventories and Bankers. I am considering two options:

    1. Put each kind of master data in its own table. Advantage: each table holds exactly the information that kind of master file needs (Customer: name, address, ...; Inventory: type, manufacturer, condition, ...). Disadvantage: not flexible. When I want a new kind of master data, such as Insurer, I have to design another table.

    2. Put everything in one table, with a foreign key to a second table that lists the type of each kind of master data (table 1: id, data_type, code, name, address, ...; table 2: data_type, data_type_name). Advantage: flexible. If I want a new kind of master data such as Insurer, I just put a row into table 2 (code: 002, name: Insurer) and then put the details of each insurer into table 1. Disadvantage: table 1 must have enough fields to store every kind of information, including customer name, address, account, inventory manufacturer, inventory quality, and so on.

    So which method do you usually use (or which do you think works better)? Thank you very much.
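    A minimal sketch of option 2 in Python with sqlite3 (all table and column names here are illustrative, not taken from the question):

        import sqlite3

        conn = sqlite3.connect(':memory:')
        c = conn.cursor()

        # "Table 2": one row per kind of master data
        c.execute("""CREATE TABLE master_type (
                         data_type      TEXT PRIMARY KEY,
                         data_type_name TEXT NOT NULL)""")

        # "Table 1": one row per master record, typed via a foreign key
        c.execute("""CREATE TABLE master_data (
                         id        INTEGER PRIMARY KEY,
                         data_type TEXT NOT NULL REFERENCES master_type(data_type),
                         code      TEXT,
                         name      TEXT,
                         address   TEXT)""")

        # Adding a new kind of master data is an INSERT, not new DDL:
        c.execute("INSERT INTO master_type VALUES ('002', 'Insurer')")
        c.execute("""INSERT INTO master_data (data_type, code, name)
                     VALUES ('002', 'INS-001', 'Acme Insurance')""")
        conn.commit()

    The trade-off the question describes shows up here as the generic code/name/address columns, which must cover every master-data type.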

  • Problems with display of UTF-8 encoded content from a DB

    - by LookUp Webmaster
    Dear members of the Stack Overflow community,

    We are developing a web application using the Zend Framework, and we are facing some encoding issues that we hope you might help us solve. Certain tables in a MySQL database need to be displayed as HTML. Because the site is in Spanish, the database contains characters like "á" or "ñ". Our internal policy is to set all encodings to UTF-8, including all the databases and tables.

    The problem is that when we retrieve content from the DB, some characters are displayed as question marks. We are out of ideas. These are all the things we have already tried and double-checked:

    1. The SQL file from which we load all the data is properly UTF-8 encoded.
    2. The SQL is loaded through phpMyAdmin (which is configured as UTF-8), and the resulting tables are displayed properly.
    3. The NetBeans environment used for coding is also set to UTF-8.

    The weird thing is that all content hard-coded in PHP or HTML is displayed properly. Only the values extracted from the database have issues. Any ideas? Thank you very much.
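    One link in the chain the checklist doesn't cover is the character set of the client connection itself, which in many MySQL clients defaults to latin1; when every other layer is UTF-8, that is a common culprit, and in PHP it is addressed with mysqli::set_charset('utf8') or by issuing SET NAMES utf8 after connecting. A quick way to inspect the connection, sketched here in Python (credentials are placeholders):

        import MySQLdb

        # Without charset=..., the connection may default to latin1 and turn
        # characters like "a-acute" into question marks even though the
        # tables themselves are UTF-8.
        conn = MySQLdb.connect(host='localhost', user='user', passwd='secret',
                               db='mydb', charset='utf8', use_unicode=True)
        cur = conn.cursor()
        cur.execute("SELECT @@character_set_client, "
                    "@@character_set_connection, @@character_set_results")
        print(cur.fetchone())   # all three should report utf8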

  • how to model a follower stream in appengine?

    - by molicule
    I am trying to design tables to build out a follower relationship. Say I have a stream of 140-char records that have a user, a hashtag and other text. Users follow other users, and can also follow hashtags. I am outlining the way I've designed this below, but there are two limitations in my design, and I was wondering if others had smarter ways to accomplish the same goal. The issues with this are:

    1. The list of followers is copied in for each record.
    2. If a new follower is added, or one removed, all the records have to be updated.

    The code:

        class HashtagFollowers(db.Model):
            """This table contains the followers for each hashtag"""
            hashtag = db.StringProperty()
            followers = db.StringListProperty()

        class UserFollowers(db.Model):
            """This table contains the followers for each user"""
            username = db.StringProperty()
            followers = db.StringListProperty()

        class stream(db.Model):
            """This table contains the data stream"""
            username = db.StringProperty()
            hashtag = db.StringProperty()
            text = db.TextProperty()

            def save(self):
                """On each save, all the followers for each hashtag and user
                are added into another table with this record as the parent"""
                super(stream, self).save()
                hfs = HashtagFollowers.all().filter("hashtag =", self.hashtag).fetch(10)
                for hf in hfs:
                    sh = streamHashtags(parent=self, followers=hf.followers)
                    sh.save()
                ufs = UserFollowers.all().filter("username =", self.username).fetch(10)
                for uf in ufs:
                    uh = streamUsers(parent=self, followers=uf.followers)
                    uh.save()

        class streamHashtags(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

        class streamUsers(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

    Now, to get the stream of followed hashtags:

        indexes = db.GqlQuery("""SELECT __key__ from streamHashtags where followers = 'myusername'""")
        keys = [k.parent() for k in indexes[offset:numresults]]
        return db.get(keys)

    Is there a smarter way to do this?

  • MySQL db Audit Trail Trigger

    - by Natkeeran
    I need to track changes (an audit trail) in certain tables in a MySQL DB, and I am trying to implement the solution suggested here. I have an AuditLog table with the following columns: AuditLogID, TableName, RowPK, FieldName, OldValue, NewValue, TimeStamp. The stored procedure below executes fine and creates the procedure. A call to the procedure such as:

        CALL addLogTrigger('ProductTypes', 'ProductTypeID');

    executes, but does not create any triggers; SHOW TRIGGERS returns an empty set. Please let me know what the issue could be, or an alternate way to implement this.

        DROP PROCEDURE IF EXISTS addLogTrigger;

        DELIMITER $

        CREATE PROCEDURE addLogTrigger(IN tableName VARCHAR(255), IN pkField VARCHAR(255))
        BEGIN
            SELECT CONCAT(
                'DELIMITER $\n',
                'CREATE TRIGGER ', tableName, '_AU AFTER UPDATE ON ', tableName,
                ' FOR EACH ROW BEGIN ',
                GROUP_CONCAT(
                    CONCAT(
                        'IF NOT( OLD.', column_name, ' <=> NEW.', column_name, ') THEN ',
                        'INSERT INTO AuditLog (',
                            'TableName, ',
                            'RowPK, ',
                            'FieldName, ',
                            'OldValue, ',
                            'NewValue'
                        ') VALUES ( ''', table_name, ''', NEW.', pkField, ', ''', column_name,
                        ''', OLD.', column_name, ', NEW.', column_name, '); END IF;'
                    )
                    SEPARATOR ' '
                ),
                ' END;$'
            )
            FROM information_schema.columns
            WHERE table_schema = database()
              AND table_name = tableName;
        END$

        DELIMITER ;
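    One detail that is easy to miss: the procedure only SELECTs the CREATE TRIGGER statement as a string; nothing ever executes it, and (as far as I know) MySQL will not PREPARE a CREATE TRIGGER inside a stored routine, so the generated DDL has to be run from a regular client connection. A sketch of that workaround in Python (connection details are placeholders, and note that the DELIMITER lines are a mysql command-line construct, not SQL):

        import MySQLdb

        conn = MySQLdb.connect(host='localhost', user='root', passwd='secret', db='mydb')
        cur = conn.cursor()

        # The procedure returns the generated CREATE TRIGGER text as a result set.
        cur.execute("CALL addLogTrigger('ProductTypes', 'ProductTypeID')")
        ddl = cur.fetchone()[0]
        cur.close()

        # Strip the client-only DELIMITER framing before executing through the API.
        ddl = ddl.replace('DELIMITER $\n', '').rstrip('$')

        cur = conn.cursor()
        cur.execute(ddl)        # this is what actually creates the trigger
        conn.commit()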

  • PEAR:DB connection parameters

    - by Markus Ossi
    I just finished my first PHP site and now I have a security-related question. I used PEAR::DB for the database connection and made a separate parameter file for it. How should I hide this parameter file? I found a guide (http://www.kitebird.com/articles/peardb.html) that says:

        Another way to specify connection parameters is to put them in a separate file that you reference from your main script. ... It also enables you to move the parameter file outside of the web server's document tree, which prevents its contents from being displayed literally if the server becomes misconfigured and starts serving PHP scripts as plain text.

    I have now put my file in a directory like /include/db_parameters.inc. However, if I go to this URL, the web server shows me the contents of the file, including my database username and password. From what I've understood, I should protect this file so that even if PHP were served as text, nobody could read it. What does "outside of the web server's document tree" mean here? Putting the PHP file out of the public_html directory altogether, deeper into the server file system? Some chmod?

  • MSI install sequence - run DB scripts before services start

    - by marc_s
    Folks, we're running into some sequencing troubles with our MSI install. As part of our app, we install a bunch of services and allow the user to pick whether to start them right away or later. When they start right away, they seem to start too early in the install sequence, before our database manager has had a chance to update the database. Right now, our custom action to run the database updater looks like this; it is run after "InstallFinalize", very late in the process:

        <InstallExecuteSequence>
            <RemoveExistingProducts After='InstallInitialize' />
            <Custom Action='RunDbUpdateManagerAction' After='InstallFinalize'>&DbUpdateManager=3</Custom>
        </InstallExecuteSequence>

    What would be the more appropriate step to run after or before, to make sure the DB scripts are executed before any of the installed services start up? Is there a "BeforeServiceStart" step?

    EDIT: Just defining the "Before='StartServices'" attribute on the tag didn't solve my problem. I am assuming the issue is this: the custom action has an "inner text", which represents a condition, and this condition is "&DbUpdateManager=3". From what I can deduce from trial and error, this probably means "the DbUpdateManager feature must be published". Now, the trouble is: "PublishFeature" comes way at the end of the install sequence, just before "InstallFinalize", and definitely AFTER InstallServices / StartServices. So when I specify the "Before='StartServices'" requirement, the condition "the DbUpdateManager feature must be published" isn't true yet, so DbUpdateManager doesn't get executed :-(

    I tried removing the condition; in that case, my DbUpdateManager sometimes doesn't execute at all, sometimes more than once, with no real clear pattern as to what happens when.

    Any more ideas? Is there a way I could check for a condition like "the DbUpdateManager feature is installed", which would be true after the "InstallFiles" step?

    Marc

  • Fix DB duplicate entries (MySQL bug)

    - by Silence
    I'm using MySQL 4.1. Some tables have duplicate entries that violate their constraints, and when I try to group rows, MySQL doesn't recognise the rows as being similar. Example:

    Table A has a column "Name" with the UNIQUE property. The table contains one row with the name 'Hach?' and one row with the same name, but with a square (an unprintable character I can't reproduce in this text field) at the end instead of the '?'. A GROUP BY on these two rows returns two separate rows.

    This causes several problems, including the fact that I can't export and re-import the database: on re-import, an error mentions that an INSERT has failed because it violates a constraint. In theory I could try to import, wait for the first error, fix the import script and the original DB, and repeat. In practice, that would take forever.

    Is there a way to list all the anomalies, or to force the database to recheck its constraints (and list all the values/rows that violate them)? I can supply the .MYD file if it can be helpful.
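    If, as the symptoms suggest, the stray byte in the near-duplicate is non-ASCII, one way to surface suspects is to compare each value against its ASCII-converted form, since CONVERT(... USING ascii) replaces unrepresentable bytes with '?'. A hedged sketch in Python (credentials are placeholders, the table name is from the example, and the behaviour on MySQL 4.1 should be verified):

        import MySQLdb

        conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='mydb')
        cur = conn.cursor()

        # Rows whose Name contains non-ASCII bytes are the likely culprits;
        # HEX() shows exactly which bytes differ between near-duplicates.
        cur.execute("""SELECT Name, HEX(Name)
                       FROM A
                       WHERE Name <> CONVERT(Name USING ascii)""")
        for name, hexval in cur.fetchall():
            print(name, hexval)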

  • Heroku: Postgres type operator error after migrating DB from MySQL

    - by sevennineteen
    This is a follow-up to a question I'd asked earlier, which phrased this as more of a programming problem than a database problem:

        http://stackoverflow.com/questions/2935985/postgres-error-with-sinatra-haml-datamapper-on-heroku

    I believe the problem has been isolated to the storage of the ID column in Heroku's Postgres database after running db:push. In short, my app runs properly on my original MySQL database, but throws Postgres errors on Heroku when executing any query on the ID column, which seems to have been stored in Postgres as TEXT even though it is stored as INT in MySQL.

    My question is why the ID column is being created as TEXT in Postgres on the data transfer to Heroku, and whether there's any way for me to prevent this. Here's the output from a heroku console session which demonstrates the issue:

        Ruby console for myapp.heroku.com
        >> Post.first.title
        => "Welcome to First!"
        >> Post.first.title.class
        => String
        >> Post.first.id
        => 1
        >> Post.first.id.class
        => Fixnum
        >> Post[1]
        PostgresError: ERROR: operator does not exist: text = integer
        LINE 1: ...", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER...
        HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
        Query: SELECT "id", "name", "email", "url", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER BY "id" LIMIT 1

    Thanks!
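    If retyping the column in place is acceptable, Postgres can convert it with an explicit cast via ALTER TABLE ... USING; a sketch in Python with psycopg2 (connection details are placeholders, and this assumes every existing id value is numeric text):

        import psycopg2

        conn = psycopg2.connect('dbname=mydb user=me password=secret host=localhost')
        cur = conn.cursor()

        # Convert the text id column to integer, casting the existing values.
        cur.execute('ALTER TABLE posts ALTER COLUMN id TYPE integer USING id::integer')
        conn.commit()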

  • PHP Form - Edit & Delete via Text File Db

    - by Jax
    Hi, I pieced together the script below from various tutorials, examples, etc. Right now the script saves Id, Name, Url to a text-file DB with a "|" delimiter, like:

        1|John|http://www.john.com|
        2|Mark|http://www.mark.com|
        3|Fred|http://www.fred.com|

    But I'm having a hard time trying to make the "UPDATE" and "DELETE" buttons work. Can someone please post code which will:

    1. let me update/save any changed data for that row (for the UPDATE button)
    2. let me delete that row (for the DELETE button)

    PLEASE copy and paste the code below and try it for yourself. I would like to keep the output format of the script below too. Thanks, D-

        $file = "data.txt";
        $name = $_POST['name'];
        $url  = $_POST['url'];

        $data = file('data.txt');
        $i = 1;
        foreach ($data as $line) {
            $line = explode('|', $line);
            $i++;
        }

        if (isset($_POST['submits'])) {
            $fp = fopen($file, "a+");
            fwrite($fp, $i."|".$name."|".$url."|\n");
            fclose($fp);
        }
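    For reference, the mechanics the question asks about (updating or deleting one row of a pipe-delimited text file by rewriting the file) are language-neutral; a minimal sketch in Python, matching the file layout above:

        def load(path):
            """Read the pipe-delimited file into a list of [id, name, url] rows."""
            rows = []
            for line in open(path):
                parts = line.rstrip('\n').split('|')
                if len(parts) >= 3:
                    rows.append(parts[:3])
            return rows

        def save(path, rows):
            """Rewrite the whole file; a text-file DB updates by rewriting."""
            f = open(path, 'w')
            for r in rows:
                f.write('|'.join(r) + '|\n')
            f.close()

        def update_row(path, row_id, name, url):
            rows = [[row_id, name, url] if r[0] == row_id else r for r in load(path)]
            save(path, rows)

        def delete_row(path, row_id):
            save(path, [r for r in load(path) if r[0] != row_id])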

  • Invoking a SOAP ( Web Services ) from ORACLE DB

    - by Mousarules
    Dears, kindly note that I'm trying to invoke a SOAP web service from an Oracle DB using PL/SQL. After some investigation, it appears that I have to use the UTL_HTTP package, but it didn't work for me. Kindly advise where exactly I should place the following SOAP request in PL/SQL so it can be invoked. Is it possible?

    SOAP 1.1

    The following is a sample SOAP 1.1 request and response. The placeholders shown need to be replaced with actual values.

        POST /gmgwebservice/service.asmx HTTP/1.1
        Host: bulk.umniah.com
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "http://tempuri.org/SendSMS"

        <SendSMS xmlns="http://tempuri.org/">
          <UserName>string</UserName>
          <Password>string</Password>
          <MessageBody>string</MessageBody>
          <Sender>string</Sender>
          <Destination>string</Destination>
        </SendSMS>

        HTTP/1.1 200 OK
        Content-Type: text/xml; charset=utf-8
        Content-Length: length

        <SendSMSResponse xmlns="http://tempuri.org/">
          <SendSMSResult>string</SendSMSResult>
        </SendSMSResponse>

    This web service belongs to a web site called Bulk Messaging; the site sends an SMS to a specific mobile number when some text boxes are filled in. I need this to be done from Oracle Forms when a specific action occurs (a JOB), but I don't know how to use it inside my PL/SQL code. I hope that's clear; is there anything else I should mention?
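    For reference, the raw exchange that UTL_HTTP has to reproduce is an HTTP POST carrying the Content-Type and SOAPAction headers plus a SOAP envelope around the SendSMS element (the service's sample omits the envelope; the wrapper below is standard SOAP 1.1 framing). A sketch of the same request in Python, with placeholder credentials and number:

        import urllib2

        envelope = """<?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <SendSMS xmlns="http://tempuri.org/">
              <UserName>user</UserName>
              <Password>secret</Password>
              <MessageBody>hello</MessageBody>
              <Sender>me</Sender>
              <Destination>962700000000</Destination>
            </SendSMS>
          </soap:Body>
        </soap:Envelope>"""

        # POST the envelope with the two headers the service requires.
        req = urllib2.Request('http://bulk.umniah.com/gmgwebservice/service.asmx', envelope)
        req.add_header('Content-Type', 'text/xml; charset=utf-8')
        req.add_header('SOAPAction', '"http://tempuri.org/SendSMS"')
        print(urllib2.urlopen(req).read())   # the SendSMSResponse envelope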

  • Can't fetch production DB results using Google App Engine remote_api

    - by Alon
    Hey, I'm trying to work with /remote_api on a django-patch App Engine app I have running. I want to select a few rows from my online production app locally. I can't seem to manage to do so: everything authenticates fine, and it doesn't break on imports, but when I try to fetch something it just doesn't print anything. I placed the test script inside my local app dir:

        #!/usr/bin/env python
        import os
        import sys

        # Hardwire in appengine modules to PYTHONPATH
        # or use wrapper to do it more elegantly
        appengine_dirs = ['myworkingpath']
        sys.path.extend(appengine_dirs)

        # Add your models to path
        my_root_dir = os.path.abspath(os.path.dirname(__file__))
        sys.path.insert(0, my_root_dir)

        from google.appengine.ext import db
        from google.appengine.ext.remote_api import remote_api_stub
        import getpass

        APP_NAME = 'Myappname'
        os.environ['AUTH_DOMAIN'] = 'gmail.com'
        os.environ['USER_EMAIL'] = '[email protected]'

        def auth_func():
            return (raw_input('Username:'), getpass.getpass('Password:'))

        # Use local dev server by passing in as parameter:
        #   servername='localhost:8080'
        # Otherwise, remote_api assumes you are targeting APP_NAME.appspot.com
        remote_api_stub.ConfigureRemoteDatastore(APP_NAME, '/remote_api', auth_func)

        # Do stuff like your code was running on App Engine
        from channel.models import Channel, Channel2Operator

        myresults = Channel.all().fetch(10)
        for result in myresults:
            print result.key()

    It doesn't give any error or print anything, and neither does the remote_api console example Google provides. When I print myresults I get [].

  • Error when feeding a mysql db with a python-parsed data

    - by Barnabe
    I use this bit of code to feed some data I have parsed from a web page into a MySQL database:

        c = db.cursor()
        c.executemany(
            """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3,
                                 Level3, Value4, Level4, Value5, Level5, ObsDate)
               VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
            clean_data)

    The parsed data looks like this (there are several hundred such lines):

        clean_data = [(161,00:00:00,8.19,1,4.46,4,7.87,4,6.54,null,4.45,6,2010-04-12),
                      (162,00:00:00,7.55,1,9.52,1,1.90,1,4.76,null,0.14,1,2010-04-12),
                      (164,00:00:00,8.01,1,8.09,1,0,null,8.49,null,0.20,2,2010-04-12),
                      (166,00:00:00,8.30,1,4.77,4,10.99,5,9.11,null,0.36,2,2010-04-12)]

    If I hard-code the data as above, MySQL accepts my request (except for some quibbles about formatting), but if the variable clean_data is instead defined as the result of the parsing code, like this:

        cleaner = [(""" $!!'""", ')]'),(' $!!', ') etc etc]

        def processThis(str, lst):
            for find, replace in lst:
                str = str.replace(find, replace)
            return str

        clean_data = processThis(data, cleaner)

    then I get the dreaded "TypeError: not enough arguments for format string". After playing with formatting options for a few hours (I am very new to this) I am confused... what is the difference between the hard-coded data and the result of the processThis function as far as MySQL is concerned? Any ideas greatly appreciated...
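    One thing worth staring at: processThis returns a single string, while executemany expects a sequence of parameter tuples, with one Python value per %s placeholder; passing a string is exactly what produces "not enough arguments for format string". A minimal sketch of the shape the driver wants, continuing the question's c = db.cursor() (values shortened for illustration):

        # One tuple per row, one Python value per %s placeholder; the driver
        # quotes each value itself, so None becomes NULL and strings get quoted.
        clean_data = [
            (161, '00:00:00', 8.19, 1, 4.46, 4, 7.87, 4, 6.54, None, 4.45, 6, '2010-04-12'),
            (162, '00:00:00', 7.55, 1, 9.52, 1, 1.90, 1, 4.76, None, 0.14, 1, '2010-04-12'),
        ]
        c.executemany(
            """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3,
                                 Level3, Value4, Level4, Value5, Level5, ObsDate)
               VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
            clean_data)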

  • Magento, 1 db field not saved

    - by david parloir
    Hi there, I have a problem with one field of the DB. With this code:

        $expireMonth = Mage::getStoreConfig('points_options/config_points/expiration_period',
                                            Mage::app()->getStore()->getId());
        if (!is_null($expireMonth) && ($expireMonth > 0)) {
            $expireDate = date("Y-m-d H:i:s", strtotime("+" . $expireMonth . " month"));
        } else {
            $expireDate = NULL;
        }
        //die($expireDate);

        //store in points history table
        $this->_pointsModel->setCustomerId($this->_customer->getId())
            ->setOrdersId('welcome')
            ->setPointsPending($pointsForNewCustomer)
            ->setPointsComment(Mage::helper('points')->__('welcome points'))
            ->setDateAdded(date('Y-m-d H:i:s'))
            ->setPointsStatus(2) //confirmed
            ->setPointsType('WE')
            ->setStoreId(Mage::app()->getStore()->getId())
            ->setExpireDate($expireDate)
            ->save();

    every field is saved in the table except for expire_date. If I uncomment the die($expireDate), I see the correct value, something like 2012-01-13 13:21:12. The field is defined as:

        `expire_date` datetime NULL

    Any thoughts?

    Edit: the solution is:

        $expireDate = date("Y-m-d H:i:s", strtotime("+" . $expireMonth . " months"));

    Check out the "s" in my strtotime expression.

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL, and I am finding that my schema is getting incredibly complicated. I am looking for a new DB that will suit my needs.

    Let's assume I am building a news aggregator (which collects news from multiple websites). I run algorithms to determine whether two news items from different sites actually refer to the same topic, and cluster such items together. The relationship is depicted below:

        cluster
         \--news1
             \--word1
             \--word2
         \--news2
             \--word3
         \--news3
             \--word1
             \--word3

    I then apply some magic to determine the importance of each word. Summing the importance of each word gives me the importance of a news article; summing the importance of each news article gives me the importance of a cluster. Note that above clusters there are also subgroups (split by region, etc.) and categories (sports, etc.) whose importance I have to determine per day.

    I have used views in the past to do this, but I realized that views are very slow, so I will normally insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), which can get pretty messy. Also, the "importance" metric will change, so it has become increasingly difficult to alter tables and update the data (for which I am using TRUNCATE TABLE and then re-inserting from scratch).

    I am currently looking into something schemaless like MongoDB. I do not need distributedness. I would very much want something reasonably fast (which can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM, because I personally like ORMs a lot; I am currently using SQLAlchemy. Please help!
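    To give a feel for the fit, the entire tree above can live as one document per cluster in MongoDB, with secondary indexes on embedded fields; a brief sketch with pymongo (all field names are invented for illustration):

        from pymongo import MongoClient

        db = MongoClient().aggregator

        # One document holds a cluster, its articles, words and scores; when the
        # "importance" metric changes, that is a field update, not an ALTER TABLE.
        db.clusters.insert_one({
            'region': 'us', 'category': 'sports', 'importance': 7.3,
            'news': [
                {'title': 'news1', 'importance': 4.1,
                 'words': [{'w': 'word1', 'imp': 2.0}, {'w': 'word2', 'imp': 2.1}]},
                {'title': 'news2', 'importance': 3.2,
                 'words': [{'w': 'word3', 'imp': 3.2}]},
            ],
        })

        # Secondary indexes work on embedded fields too:
        db.clusters.create_index([('category', 1), ('importance', -1)])
        for c in db.clusters.find({'category': 'sports'}).sort('importance', -1):
            print(c['importance'])

    (If SQLAlchemy-style mapping is a requirement, ODM layers such as MongoEngine exist for MongoDB.)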

  • Need help INSERT record(s) MySQL DB

    - by JM4
    I have an online form which collects member information and stores it in a very long MySQL table. We allow up to 16 members to enroll at a single time, and the DB was originally structured to allow that. For example: if one member enrolls, his personal information (first name, last name, address, phone, email) is stored in a single row. If 15 members enroll (all at once), their personal information is stored in the same single row, which has columns for all "possible" inputs.

    I am trying to consolidate this so that every member who enrolls is put onto a new record in the database. I have seen suggestions for inserting multiple records at once, such as:

        INSERT INTO tablename VALUES
            ('$f1name', '$f1address', '$f1phone'),
            ('$f2name', '$f2address', '$f2phone') ...

    The issue with this is twofold:

    1. I do not know how many members are being enrolled from submission to submission, so the only way to build the statement above is with a loop.
    2. The information collected from the forms is NOT a single array, so I can't loop through one array and have it parse out. My information is collected as individual input fields, like: Member1FirstName, Member1LastName, Member1Phone, Member2FirstName, Member2LastName, Member2Phone... and so on.

    Is it possible to store the information in separate rows WITHOUT using a loop (and therefore without having to go back and completely restructure my form field names, which can't happen because of the way the validation rules are built)?
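    For what it's worth, the numbered field names do not force a form redesign: the conventional approach is a small server-side loop that regroups Member1*/Member2*/... into tuples, followed by a single multi-row INSERT. A sketch of that regrouping in Python (the question's stack is PHP, where the same walk over $_POST applies; table and credentials are illustrative):

        import MySQLdb

        def rows_from_form(form, max_members=16):
            """form: dict of submitted fields, e.g. {'Member1FirstName': 'Jo', ...}"""
            rows = []
            for i in range(1, max_members + 1):
                first = form.get('Member%dFirstName' % i)
                if not first:                  # member i was not filled in
                    continue
                rows.append((first,
                             form.get('Member%dLastName' % i),
                             form.get('Member%dPhone' % i)))
            return rows

        conn = MySQLdb.connect(user='user', passwd='secret', db='mydb')
        cursor = conn.cursor()
        form = {'Member1FirstName': 'Jo', 'Member1LastName': 'M', 'Member1Phone': '555'}
        cursor.executemany(
            "INSERT INTO members (first_name, last_name, phone) VALUES (%s, %s, %s)",
            rows_from_form(form))
        conn.commit()   # one row per enrolled member, one statement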

  • SSAO Distortion

    - by Robert Xu
    I'm currently (attempting) to add SSAO to my engine, except it's... not really working, to say the least. I use a deferred renderer to render my scene. I have four render targets: Albedo, Light, Normal, and Depth. Here are the parameters for all of them (surface format, depth format):

    - Albedo: 32-bit ARGB, Depth24Stencil8
    - Light: 32-bit ARGB, None
    - Normal: 32-bit ARGB, None
    - Depth: 8-bit R (Single), Depth24Stencil8

    To generate my random noise map for the SSAO, I do the following for each pixel in the noise map:

        Vector3 v3 = Vector3.Zero;
        double z = rand.NextDouble() * 2.0 - 1.0;
        double r = Math.Sqrt(1.0 - z * z);
        double angle = rand.NextDouble() * MathHelper.TwoPi;
        v3.X = (float)(r * Math.Cos(angle));
        v3.Y = (float)(r * Math.Sin(angle));
        v3.Z = (float)z;
        v3 += offset;
        v3 *= 0.5f;
        result[i] = new Color(v3);

    This is my GBuffer rendering effect:

        PixelInput RenderGBufferColorVertexShader(VertexInput input)
        {
            PixelInput pi = (PixelInput)0;
            pi.Position = mul(input.Position, WorldViewProjection);
            pi.Normal = mul(input.Normal, WorldInverseTranspose);
            pi.Color = input.Color;
            pi.TPosition = pi.Position;
            pi.WPosition = input.Position;
            return pi;
        }

        GBufferTarget RenderGBufferColorPixelShader(PixelInput input)
        {
            GBufferTarget output = (GBufferTarget)0;
            float3 position = input.TPosition.xyz / input.TPosition.w;
            output.Albedo = lerp(float4(1.0f, 1.0f, 1.0f, 1.0f), input.Color, ColorFactor);
            output.Normal = EncodeNormal(input.Normal);
            output.Depth = position.z;
            return output;
        }

    And here is the SSAO effect:

        float4 EncodeNormal(float3 normal)
        {
            return float4((normal.xyz * 0.5f) + 0.5f, 0.0f);
        }

        float3 DecodeNormal(float4 encoded)
        {
            return encoded * 2.0 - 1.0f;
        }

        float Intensity;
        float Size;
        float2 NoiseOffset;
        float4x4 ViewProjection;
        float4x4 ViewProjectionInverse;

        texture DepthMap;
        texture NormalMap;
        texture RandomMap;

        const float3 samples[16] =
        {
            float3(0.01537562, 0.01389096, 0.02276565),
            float3(-0.0332658, -0.2151698, -0.0660736),
            float3(-0.06420016, -0.1919067, 0.5329634),
            float3(-0.05896204, -0.04509097, -0.03611697),
            float3(-0.1302175, 0.01034653, 0.01543675),
            float3(0.3168565, -0.182557, -0.01421785),
            float3(-0.02134448, -0.1056605, 0.00576055),
            float3(-0.3502164, 0.281433, -0.2245609),
            float3(-0.00123525, 0.00151868, 0.02614773),
            float3(0.1814744, 0.05798516, -0.02362876),
            float3(0.07945167, -0.08302628, 0.4423518),
            float3(0.321987, -0.05670302, -0.05418307),
            float3(-0.00165138, -0.00410309, 0.00537362),
            float3(0.01687791, 0.03189049, -0.04060405),
            float3(-0.04335613, -0.00530749, 0.06443053),
            float3(0.8474263, -0.3590308, -0.02318038),
        };

        sampler DepthSampler = sampler_state
        {
            Texture = DepthMap;
            MipFilter = Point; MinFilter = Point; MagFilter = Point;
            AddressU = Clamp; AddressV = Clamp; AddressW = Clamp;
        };

        sampler NormalSampler = sampler_state
        {
            Texture = NormalMap;
            MipFilter = Linear; MinFilter = Linear; MagFilter = Linear;
            AddressU = Clamp; AddressV = Clamp; AddressW = Clamp;
        };

        sampler RandomSampler = sampler_state
        {
            Texture = RandomMap;
            MipFilter = Linear; MinFilter = Linear; MagFilter = Linear;
        };

        struct VertexInput
        {
            float4 Position : POSITION0;
            float2 TextureCoordinates : TEXCOORD0;
        };

        struct PixelInput
        {
            float4 Position : POSITION0;
            float2 TextureCoordinates : TEXCOORD0;
        };

        PixelInput SSAOVertexShader(VertexInput input)
        {
            PixelInput pi = (PixelInput)0;
            pi.Position = input.Position;
            pi.TextureCoordinates = input.TextureCoordinates;
            return pi;
        }

        float3 GetXYZ(float2 uv)
        {
            float depth = tex2D(DepthSampler, uv);
            float2 xy = uv * 2.0f - 1.0f;
            xy.y *= -1;
            float4 p = float4(xy, depth, 1);
            float4 q = mul(p, ViewProjectionInverse);
            return q.xyz / q.w;
        }

        float3 GetNormal(float2 uv)
        {
            return DecodeNormal(tex2D(NormalSampler, uv));
        }

        float4 SSAOPixelShader(PixelInput input) : COLOR0
        {
            float depth = tex2D(DepthSampler, input.TextureCoordinates);
            float3 position = GetXYZ(input.TextureCoordinates);
            float3 normal = GetNormal(input.TextureCoordinates);
            float occlusion = 1.0f;
            float3 reflectionRay = DecodeNormal(tex2D(RandomSampler, input.TextureCoordinates + NoiseOffset));
            for (int i = 0; i < 16; i++)
            {
                float3 sampleXYZ = position + reflect(samples[i], reflectionRay) * Size;
                float4 screenXYZW = mul(float4(sampleXYZ, 1.0f), ViewProjection);
                float3 screenXYZ = screenXYZW.xyz / screenXYZW.w;
                float2 sampleUV = float2(screenXYZ.x * 0.5f + 0.5f, 1.0f - (screenXYZ.y * 0.5f + 0.5f));
                float frontMostDepthAtSample = tex2D(DepthSampler, sampleUV);
                if (frontMostDepthAtSample < screenXYZ.z)
                {
                    occlusion -= 1.0f / 16.0f;
                }
            }
            return float4(occlusion * Intensity * float3(1.0, 1.0, 1.0), 1.0);
        }

        technique SSAO
        {
            pass Pass0
            {
                VertexShader = compile vs_3_0 SSAOVertexShader();
                PixelShader = compile ps_3_0 SSAOPixelShader();
            }
        }

    However, when I use the effect, I get some pretty bad distortion:

        [screenshot omitted]

    Here's the light map that goes with it; is the static-like effect supposed to look like that?

        [screenshot omitted]

    I've noticed that even if I'm looking at nothing, I still get the static-like effect. (You can see it in the screenshot; the top half doesn't have any geometry, yet it still has the static-like effect.) Also, does anyone have any advice on how to effectively debug shaders?

  • Calculating angle a segment forms with a ray

    - by kr1zz
    I am given a point C and a ray r starting there. I know the coordinates (xc, yc) of C and the angle theta that r forms with the horizontal, theta in (-pi, pi]. I am also given another point P whose coordinates (xp, yp) I know. How do I calculate the angle alpha that the segment CP forms with the ray r, with alpha in (-pi, pi]? [example figures omitted] I can use the atan2 function.
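    A sketch of the computation in Python: take the absolute angle of CP with atan2, subtract theta, and wrap the result back into (-pi, pi]:

        import math

        def angle_from_ray(xc, yc, theta, xp, yp):
            alpha = math.atan2(yp - yc, xp - xc) - theta   # may fall outside (-pi, pi]
            while alpha <= -math.pi:                       # wrap back into (-pi, pi]
                alpha += 2.0 * math.pi
            while alpha > math.pi:
                alpha -= 2.0 * math.pi
            return alpha

        # Ray pointing straight up (theta = pi/2), P directly to the right of C:
        print(angle_from_ray(0.0, 0.0, math.pi / 2, 1.0, 0.0))   # -pi/2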

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in SQL Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies, perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle.

    I have multiple nested subreports and need to use summary data in the outer reports, and the same data, in detail, in the inner reports. In order to spare the DB server multiple executions, I thought to populate some temp tables at the beginning and then query just those tables throughout the report and the subreports.

    In SSRS, datasets are evidently executed in the order they appear in the RDL file, and you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made it the first dataset in my report. This SP works when I run it from SQL Developer, and I can query the data from the temp tables. However, this didn't appear to work out in the report: SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS, my datasets were empty.

    I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from SQL Developer works fine, for example:

        DECLARE
            ActivityCode varchar2(15) := '1208-0916     ';
            ExecutionID  varchar2(32) := SYS_GUID();
        BEGIN
            CIPProjectBudget(ActivityCode, ExecutionID);
        END;

    Never mind that in this example I don't know the GUID; this simply proves it works, because rows are inserted into my four tables. But in the SSRS report, I'm still getting no rows in my datasets, and SQL Developer confirms no rows are being inserted. So I'm thinking along lines like these:

    - Oracle uses implicit transactions and my changes aren't getting committed?
    - Even though I can prove that the non-rowset-returning SP is executing (because if I leave out the parameter mapping, it complains at report-rendering time about not having enough parameters), perhaps it's not really executing. Somehow.
    - Wrong execution order isn't the problem, or rows would appear in the tables, and they aren't.

    I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working, and I am stuck.

    If you want more details: in my SSRS report I have a List object (it's a container that repeats once for each row in a dataset) that has some header values and then contains a subreport. Eventually, there will be four reports in total: one main report with three nested subreports. Each subreport will be in a List on the parent report.

  • Is it possible for double-escaping to cause harm to the DB?

    - by waiwai933
    If I accidentally double-escape a string, can the DB be harmed? For the purposes of this question, let's say I'm not using parameterized queries. For example, let's say I get the following input:

        bob's bike

    And I escape that:

        bob\'s bike

    But my code is horrible, and escapes it again:

        bob\\\'s bike

    Now, if I insert that into a DB, the value in the DB will be:

        bob\'s bike

    Which, while not what I want, won't harm the DB. Is it possible for any input that's double-escaped to do something malicious to the DB, assuming that I take all other necessary security precautions?

  • Sanitizing DB inputs with XSLT

    - by azathoth
    Hello, I've been looking for a method to strip my XML content of apostrophes ('), like:

        <name>Jim O'Connor</name>

    since my DBMS is complaining about receiving those. Looking at the example described here, which is supposed to replace ' with '', I constructed the following script:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes" indent="yes" />

          <xsl:template match="node()|@*">
            <xsl:copy>
              <xsl:apply-templates select="node()|@*" />
            </xsl:copy>
          </xsl:template>

          <xsl:template name="sqlApostrophe">
            <xsl:param name="string" />
            <xsl:variable name="apostrophe">'</xsl:variable>
            <xsl:choose>
              <xsl:when test="contains($string,$apostrophe)">
                <xsl:value-of select="concat(substring-before($string,$apostrophe), $apostrophe,$apostrophe)"
                              disable-output-escaping="yes" />
                <xsl:call-template name="sqlApostrophe">
                  <xsl:with-param name="string" select="substring-after($string,$apostrophe)" />
                </xsl:call-template>
              </xsl:when>
              <xsl:otherwise>
                <xsl:value-of select="$string" disable-output-escaping="yes" />
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>

          <xsl:template match="node()|@*">
            <xsl:apply-templates name="sqlApostrophe"/>
          </xsl:template>
        </xsl:stylesheet>

    However, the processor isn't accepting it. What am I missing here? Is there a better way to get rid of the apostrophes? Perhaps another approach to sanitizing DB inputs by using XSLT? Thanks for your help.

  • Why does WebSharingAppDemo-CEProviderEndToEnd sample still need a client db connection after scope c

    - by Don
    I'm researching a way to build an n-tiered sync solution. From the WebSharingAppDemo-CEProviderEndToEnd sample it seems almost feasible; however, for some reason the app will only sync if the client has a live SQL DB connection. Can someone explain what I'm missing, and how to sync without exposing SQL to the internet?

    The problem I'm experiencing is that when I provide a relational sync provider that has an open SQL connection from the client, it works fine, but when I provide a relational sync provider that has a closed but configured connection string, as in the example, I get an error from the WCF service stating that the server did not receive the batch file. So what am I doing wrong?

        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
        builder.DataSource = hostName;
        builder.IntegratedSecurity = true;
        builder.InitialCatalog = "mydbname";
        builder.ConnectTimeout = 1;
        provider.Connection = new SqlConnection(builder.ToString());
        // provider.Connection.Open(); **** un-commenting this causes the code to work

        // create a new scope description and add the appropriate tables to this scope
        DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription(SyncUtils.ScopeName);

        // class to be used to provision the scope defined above
        SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning();
        ....

    The error I get occurs in this part of the WCF code:

        public SyncSessionStatistics ApplyChanges(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, object changeData)
        {
            Log("ProcessChangeBatch: {0}", this.peerProvider.Connection.ConnectionString);
            DbSyncContext dataRetriever = changeData as DbSyncContext;

            if (dataRetriever != null && dataRetriever.IsDataBatched)
            {
                string remotePeerId = dataRetriever.MadeWithKnowledge.ReplicaId.ToString();

                // Data is batched. The client should have uploaded this file to us prior to calling ApplyChanges.
                // So look for it. The Id would be the DbSyncContext.BatchFileName, which is just the batch
                // file name without the complete path.
                string localBatchFileName = null;
                if (!this.batchIdToFileMapper.TryGetValue(dataRetriever.BatchFileName, out localBatchFileName))
                {
                    // Service has not received this file. Throw exception.
                    throw new FaultException<WebSyncFaultException>(new WebSyncFaultException("No batch file uploaded for id " + dataRetriever.BatchFileName, null));
                }
                dataRetriever.BatchFileName = localBatchFileName;
            }

    Any ideas?

  • Parsing CSV File to MySQL DB in PHP

    - by Austin
    I have a roughly 350-line CSV file with all sorts of vendors that fall into categories like Clothes, Tools, Entertainment, etc. Using the following code I have been able to print out my CSV file:

        <?php
        $fp = fopen('promo_catalog_expanded.csv', 'r');

        echo '<tr><td>';
        echo implode('</td><td>', fgetcsv($fp, 4096, ','));
        echo '</td></tr>';

        while (!feof($fp)) {
            list($cat, $var, $name, $var2, $web, $var3, $phone, $var4, $kw, $var5, $desc) = fgetcsv($fp, 4096);
            echo '<tr><td>';
            echo $cat . '</td><td>' . $name . '</td><td><a href="http://www.' . $web . '" target="_blank">'
                . $web . '</a></td><td>' . $phone . '</td><td>' . $kw . '</td><td>' . $desc . '</td>';
            echo '</td></tr>';
        }
        fclose($fp);
        show_source(__FILE__);
        ?>

    The first thing you will probably notice is the extraneous $var placeholders within the list(). This is because of how the Excel spreadsheet/CSV file is laid out:

        Category,,Company Name,,Website,,Phone,,Keywords,,Description
        ,,,,,,,,,,
        Clothes,,4imprint,,4imprint.com,,877-466-7746,,"polos, jackets, coats, workwear, sweatshirts, hoodies, long sleeve, pullovers, t-shirts, tees, tshirts,",,An embroidery and apparel company based in Wisconsin.
        ,,Apollo Embroidery,,apolloemb.com,,1-800-982-2146,,"hats, caps, headwear, bags, totes, backpacks, blankets, embroidery",,An embroidery sales company based in California.

    One thing to note is that the last line starts with two commas, as it is also listed within the "Clothes" category. My concern is that I am going about the CSV output wrong. Should I be using a foreach loop instead of this list() approach? Should I first get rid of the unnecessary blank columns? Please advise of any flaws you may find and improvements I can make, so I can be ready to import this data into a MySQL DB.

  • Parsing Data in XML and Storing to DB in Python

    - by Rakesh
    Hi guys, I have a problem parsing an XML file and entering the data into SQLite. I need to enter the character data inside each TOKEN element, like 111, AAA, BBB, etc.:

        <DOCUMENT>
          <PAGE width="544.252" height="634.961" number="1" id="p1">
            <MEDIABOX x1="0" y1="0" x2="544.252" y2="634.961"/>
            <BLOCK id="p1_b1">
              <TEXT width="37.7" height="74.124" id="p1_t1" x="51.1" y="20.8652">
                <TOKEN sid="p1_s11" id="p1_w1" font-name="Verdanae" bold="yes" italic="no">111</TOKEN>
              </TEXT>
            </BLOCK>
            <BLOCK id="p1_b3">
              <TEXT width="151.267" height="10.725" id="p1_t6" x="24.099" y="572.096">
                <TOKEN sid="p1_s35" id="p1_w22" font-name="Verdanae" bold="yes" italic="yes">AAA</TOKEN>
                <TOKEN sid="p1_s36" id="p1_w23" font-name="verdanae" bold="yes" italic="no">BBB</TOKEN>
                <TOKEN sid="p1_s37" id="p1_w24" font-name="verdanae" bold="yes" italic="no">CCC</TOKEN>
              </TEXT>
            </BLOCK>
            <BLOCK id="p1_b4">
              <TEXT width="82.72" height="26" id="p1_t7" x="55.426" y="138.026">
                <TOKEN sid="p1_s42" id="p1_w29" font-name="verdanae" bold="yes" italic="no">DDD</TOKEN>
                <TOKEN sid="p1_s43" id="p1_w30" font-name="verdanae" bold="yes" italic="no">EEE</TOKEN>
              </TEXT>
              <TEXT width="101.74" height="26" id="p1_t8" x="55.406" y="162.026">
                <TOKEN sid="p1_s45" id="p1_w31" font-name="verdanae" bold="yes" italic="no">FFF</TOKEN>
              </TEXT>
              <TEXT width="152.96" height="26" id="p1_t9" x="55.406" y="186.026">
                <TOKEN sid="p1_s47" id="p1_w32" font-name="verdanae" bold="yes" italic="no">GGG</TOKEN>
                <TOKEN sid="p1_s48" id="p1_w33" font-name="verdanae" bold="yes" italic="no">HHH</TOKEN>
              </TEXT>
            </BLOCK>
          </PAGE>
        </DOCUMENT>

    In .NET this is done with three nested foreach loops (one for "DOCUMENT/PAGE/BLOCK", one for "TEXT", one for "TOKEN"), and the data is then entered into the DB. I don't get how to do it in Python; I am trying it with the lxml module.
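    The three nested loops translate almost one-for-one to lxml; a minimal sketch that walks BLOCK, then TEXT, then TOKEN, and stores the token text in SQLite (the table layout is invented for illustration):

        import sqlite3
        from lxml import etree

        tree = etree.parse('document.xml')   # the XML shown above

        conn = sqlite3.connect('tokens.db')
        cur = conn.cursor()
        cur.execute("""CREATE TABLE IF NOT EXISTS tokens
                       (block_id TEXT, text_id TEXT, token_id TEXT, content TEXT)""")

        # DOCUMENT/PAGE/BLOCK -> TEXT -> TOKEN, like the three foreach loops in .NET
        for block in tree.findall('PAGE/BLOCK'):
            for text in block.findall('TEXT'):
                for token in text.findall('TOKEN'):
                    cur.execute("INSERT INTO tokens VALUES (?, ?, ?, ?)",
                                (block.get('id'), text.get('id'),
                                 token.get('id'), token.text))
        conn.commit()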

  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I have a typical scenario and need to understand the best possible way to handle it, so here it goes. I'm developing a solution that will retrieve data from a remote SOAP-based web service and then push this data to an Oracle database on the network, as a scheduled task that executes every 15 minutes.

    The remote service keeps event queues containing the INSERT/UPDATE/DELETE operations done since the last retrieval; once I retrieve the events for the last 15 minutes, it again accumulates events for the next retrieval. Now, it's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements. There are around 60 tables in Oracle, some of them with 100+ columns. Moreover, a typical 15-minute cycle brings around 60-70 inserts, 100+ updates and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle.

    So, I need to understand how I should handle the WRITE operations (best practices) to improve performance for this application as a whole. The current test code, on every cycle:

    1. Connects to the remote service to get events.
    2. Creates a connection with the DB (a single connection object).
    3. Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done.
    4. Based on the above, calls the respective method for that type of operation and table.
    5. Uses a PreparedStatement with positional parameters, retrieving each column value from the remote service and assigning it to the statement parameters.
    6. Commits the statement and returns to the get-event class to process the next event.

    The above is repeated until all the retrieved events are processed, after which the program closes and then starts on the next cycle, and everything repeats again. Thanks for the help!
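    The two standard levers for this pattern are batching the statements and committing once per cycle instead of once per event; in the question's JDBC stack that means PreparedStatement.addBatch()/executeBatch() with auto-commit turned off. The same idea sketched in Python with cx_Oracle, purely for illustration (connection details and the table are invented):

        import cx_Oracle

        conn = cx_Oracle.connect('user', 'secret', 'dbhost/orcl')
        cur = conn.cursor()

        events = [(1, 'A'), (2, 'B'), (3, 'C')]   # stand-in for the SOAP event batch

        # One round trip for many rows and one commit per cycle,
        # instead of a prepare/execute/commit per event.
        cur.executemany("INSERT INTO events_stage (id, payload) VALUES (:1, :2)", events)
        conn.commit()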
