Search Results

Search found 101927 results on 4078 pages for 'ms sql server'.


  • Caching the response of an ASP.NET HTTP Handler server and client side

    - by Bert Vandamme
    Is it possible to cache the response of an HTTP handler on the server and on the client? This doesn't seem to be doing the trick:

        _context.Response.Cache.SetCacheability(HttpCacheability.Public);
        _context.Response.Cache.SetExpires(DateTime.Now.AddDays(7));

    The _context is the HttpContext passed as an argument to the ProcessRequest method on the IHttpHandler implementation. Any ideas?

    Update: The client does cache images that are loaded through the HTTP handler, but if another client makes the same call, the server hasn't got it cached. So for each client that asks for the image, the server goes to the database (and filestream). If we use an aspx page instead of an HTTP handler together with a caching profile, then the images are cached both on the client and the server.
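
    One addition that reportedly makes the ASP.NET output cache keep a server-side copy for handlers as well (a sketch, not verified against every IIS setup; the "id" parameter name is an assumption) is to mark the policy valid until expiry and register at least one VaryByParams entry:

        using System;
        using System.Web;

        public class ImageHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Client-side caching, as in the question:
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetExpires(DateTime.Now.AddDays(7));

                // Reported to be needed before the server-side output
                // cache will store a handler's response:
                context.Response.Cache.SetValidUntilExpires(true);
                context.Response.Cache.VaryByParams["id"] = true; // assumed query-string key

                // ... write the image bytes to context.Response here ...
            }
        }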

    Read the article

  • Updating entity fields in app engine development server

    - by Joey
    I recently tried updating a field in one of my entities on the App Engine local dev server via the SDK console. It appeared to update just fine (a simple float). However, when I followed up with a query on the entity, I received an exception: "Items in the mSomeList list must all be Key instances". mSomeList is just another list field I have in that entity, not the one I modified. Is there any reason manually changing a field would throw something off such that the server gets confused? Is this a known bug? I wrote an HTTP handler to alter the field through server code and it works fine if I take that approach.

    Update (adding details): I am using the Python Google App Engine server. Basically, if I go into the Google App Engine Launcher and press the SDK Console button, then go into one of my entities and edit a field that is a float (i.e. change it from 0 to 3.5, for instance), I suddenly get the "Items in the mMyList list must all be Key instances" error when I query the entity like this:

        query = DataModels.RegionData.gql("WHERE mRegion = :1", region)
        entry = query.get()

    The RegionData entity is what has the mMyList member. As mentioned previously, if I do not manually change the field but rather do so through server code, i.e.

        query = DataModels.RegionData.gql("WHERE mRegion = :1", region)
        entry = query.get()
        entry.mMyFloat = 3.5
        entry.put()

    then it works.

    Read the article

  • Basic Client-Server Design for persistent connections?

    - by cam
    Here's as far as I understand it:

    1. Client & server make a connection.
    2. Client sends the server data.
    3. Server interprets the data, sends the client data.

    So on, and so forth, until the client sends a disconnect signal. I'm just wondering about implementation. Steps 2 and 3 are confusing to me; maybe I'm over-complicating it. Is there any more to interpreting the data than a giant switch statement? Any good books on client/server design? Specifically ones covering multithreaded servers, scalability, and message design (byte 1 = header info, byte 2 = blah blah, etc.), geared towards C++.
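
    As one alternative to a giant switch, a table-driven dispatcher maps a message-type byte to a handler. A minimal sketch in C# (the one-byte-type framing is an assumption for illustration; the same shape maps to C++ with std::unordered_map and std::function):

        using System;
        using System.Collections.Generic;

        class MessageDispatcher
        {
            // Each message-type byte maps to a handler for its payload.
            private readonly Dictionary<byte, Action<byte[]>> handlers =
                new Dictionary<byte, Action<byte[]>>();

            public void Register(byte messageType, Action<byte[]> handler)
            {
                handlers[messageType] = handler;
            }

            public void Dispatch(byte[] message)
            {
                byte type = message[0];
                var payload = new byte[message.Length - 1];
                Array.Copy(message, 1, payload, 0, payload.Length);

                Action<byte[]> handler;
                if (handlers.TryGetValue(type, out handler))
                    handler(payload);               // route to the registered handler
                else
                    Console.WriteLine("Unknown message type: " + type);
            }
        }

    Registering a handler then looks like dispatcher.Register(0x01, payload => HandleLogin(payload)); adding a new message type no longer means editing a central switch.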

    Read the article

  • Mixing .NET versions between website and virtual directories and the "server application unavailable" error message

    - by Doug Chamberlain
    Backstory: Last month our development team created a new ASP.NET 3.5 application to place on our production website. Once we had the work completed, we asked the group that manages our server to copy the app out to our production site and configure the virtual directory as a new application. On 12/27/2010, two public 'guinea pigs' were selected to use the app, and it worked great. On 12/30/2010, we were notified by internal staff that when one staff member (the Business Process Owner) tried to access the application, they received the 'Server Application Unavailable' message.

    When I called the group that does our server support, I was told that it probably failed because I didn't close the connections in my code. However, the same group then went in and created a separate app pool for this Extension Request application, and it has had no issues since. I did a little googling, since I do not like being blamed for things, and found that the 'Server Application Unavailable' message will also appear when you have multiple applications using different frameworks and you do not put them in different application pools.

    Technical details - tree of our website structure:

        Main Website <-- ASP Classic
        +- Virtual Directory (ExtensionRequest) <-- ASP.NET 3.5

    From our server support group: 'Reviewed server logs and website setup in IIS. Had to reset the application pool as it was not working properly. This corrected the website and it is now back online. We went ahead and created an application pool for the extension web so it is isolated from the main site pool. In the past we have seen other applications do this when there is a connection being left open and the pool fills up. Would recommend reviewing site code to make sure no connections are being left open.'

    The real question: What really caused the failure? Isn't the connections-left-open issue an ASP Classic issue? Wouldn't the ExtensionRequest application have to be used (more than twice) in the first place for connections to be left open? Is it more likely the failure was caused by them not setting up the new application in its own app pool in the first place? Sorry for the long-windedness.

    Read the article

  • Some images fail to load on Windows Server 2008

    - by Guffa
    I have an application running on Windows Server 2008 that processes uploaded images. Currently it successfully processes about 8000 images per day, creating 11 different sizes of each image. The problem I have is that sometimes the application fails to load some images, and I get the error "System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.". The upload only accepts files with a JPEG extension (jpg/jpeg/jpe) or with a JPEG MIME type, and from what I can tell those images really are JPEG images.

    If I look at the image file in Windows Explorer on the server, it can successfully extract the thumbnail from the file, but if I try to open it, I get the error message "This is not a valid bitmap file, or its format is not currently supported." from Paint. If I copy the image to my own computer, running Windows 7, there is no problem opening the image. It works in Paint, Windows Photo Viewer, Adobe Bridge, and Photoshop. If I try to load the image using Image.FromStream the same way as in the application running on the server, it loads just fine. (I have copied the file back to the server, and it still doesn't work, so there is nothing in the copying process that changes it.)

    When I look at the image information in Bridge, I see that the images are created using Picasa 3.0, but other than that I can't see anything special about them. I haven't yet found anyone having the same problem, or any known problems like this with the Picasa application. Has anyone had a similar problem, or does anyone know if there is something special about images created using Picasa? Is there any image codec that needs installing on the server to handle all kinds of JPEG images?
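
    One workaround that is often suggested for this symptom (a sketch, not a confirmed fix for Picasa output specifically) is to ask GDI+ to skip its up-front validation pass when loading, which lets through some JPEGs that otherwise fail with the generic error:

        using System.Drawing;
        using System.IO;

        static class LenientImageLoader
        {
            public static Image Load(string path)
            {
                // Copy into a MemoryStream so the file isn't locked; GDI+
                // reads from the stream lazily, so it must stay open for
                // the lifetime of the Image.
                var stream = new MemoryStream(File.ReadAllBytes(path));

                // Arguments: useEmbeddedColorManagement = false,
                // validateImageData = false (the part that matters here).
                return Image.FromStream(stream, false, false);
            }
        }

    If the lenient load also fails, the next thing to compare is the JPEG variant itself (e.g. CMYK versus RGB encoding), since the codecs available on the server and the desktop can differ.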

    Read the article

  • Best Practices for a Web App Staging Server (on a budget)

    - by fig-gnuton
    I'd like to set up a staging server for a Rails app. I use Git and GitHub, Capistrano, and a VPS with Apache/Passenger. I'm curious about best practices for a staging setup, covering both the configuration of the staging server and the processes for interacting with it. I know it should be as identical to the production server as possible, but restricting public access to it will limit that, so tips on securing it for my use only would also be great. Another specific question is whether I could just create a virtual host on the VPS, so that the staging server resides alongside the production one. I have a feeling there may be reasons to avoid this, though.

    Read the article

  • Creating ports and initiating communication between client and server

    - by dave
    What I need is a server that listens on port 5060. When a client sends data to that port, the server should open up another port (any port after 1250, I believe) and forward the client's data to that port, keeping 5060 idle so it can perform the same function for the next client. So basically I need the server to:

    a) open up multiple ports, one for each client
    b) get the voice data from the client and be able to send voice data to that client

    I'm looking into the hardware specs and other such details of the scenario, so I don't have time to write such a program myself. If there's code that I can run directly (both server and client side) in Visual Studio .NET 2010 that will perform these tasks, that would be extremely helpful. Thanks a lot in advance.
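
    A minimal C# sketch of the accept-then-hand-off shape (the port-allocation scheme and the plain-text hand-off message are assumptions for illustration; a real voice server, e.g. anything SIP-like on 5060, needs far more than this):

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading;

        class HandoffServer
        {
            static int nextPort = 1250;   // per-client ports are handed out from here

            static void Main()
            {
                var broker = new TcpListener(IPAddress.Any, 5060);
                broker.Start();
                while (true)
                {
                    // 5060 only brokers the hand-off; it is free again as soon
                    // as the per-client port number has been sent back.
                    TcpClient client = broker.AcceptTcpClient();
                    int port = Interlocked.Increment(ref nextPort);

                    var perClient = new TcpListener(IPAddress.Any, port);
                    perClient.Start();
                    new Thread(() => ServeClient(perClient)).Start();

                    // Tell this client which port to reconnect to, then let go.
                    byte[] msg = Encoding.ASCII.GetBytes(port.ToString());
                    client.GetStream().Write(msg, 0, msg.Length);
                    client.Close();
                }
            }

            static void ServeClient(TcpListener perClient)
            {
                using (TcpClient c = perClient.AcceptTcpClient())
                using (NetworkStream s = c.GetStream())
                {
                    var buffer = new byte[4096];
                    int read;
                    while ((read = s.Read(buffer, 0, buffer.Length)) > 0)
                        s.Write(buffer, 0, read);   // placeholder: echo the voice data back
                }
                perClient.Stop();
            }
        }

    The client connects to 5060, reads the port number, disconnects, and reconnects to that port for the actual data exchange.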

    Read the article

  • Java cannot reserve heap size error on Windows Server

    - by Prad
    Hi, I have the following configuration:

    Server: Windows 2003 Server (32-bit)
    Java version: 1.5.0_22

    I get the following error when executing from the command line (my code is based off Eclipse, which gives the same error):

        java -XX:MaxPermSize=256m -Xmx512m
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    The server has over 20 GB of physical memory, with over 19 GB free right now. It does not give an error up to -Xmx486m. I have read other articles about contiguous memory space; there is hardly anything running on this server. Can I validate this in any way? Thanks

    Read the article

  • Deployment of SSRS 2012 From SSDT Fails - The specified report server URL could not be found

    - by Tony Covarrubias
    I have recently installed SQL Server 2012 Developer on my laptop. I created a simple SSRS report using AdventureWorks201. The report builds fine but will not deploy; I get the error "The specified report server URL http://localhost/Reports_MSSQLSERVER2012 could not be found". I am able to browse to said URL without issue. I can even upload the report there and run it just fine. I am using an administrator account in both cases (on the URL and within SSDT). Is there some reason why SSDT cannot see the URL I am specifying?

    Read the article

  • Does my basic PHP Socket Server need optimization?

    - by Tom
    Like many people, I can do a lot of things with PHP. One problem I constantly face is that other people can do it much cleaner, much more organized and much more structured, which also results in much faster execution times and far fewer bugs.

    I just finished writing a basic PHP socket server (the real core), and am asking if you can tell me what I should do differently before I start expanding the core. I'm not asking about improvements such as encrypted data, authentication or multi-threading. I'm more wondering about questions like "should I maybe do it in a more object-oriented way (using PHP5)?", or "is the general structure of the way the script works good, or should some things be done differently?". Basically: "is this how the core of a socket server should work?" In fact, I think that if I just show you the code here, many of you will immediately see room for improvements. Please be so kind as to tell me. Thanks!

        #!/usr/bin/php -q
        <?
        // config
        $timelimit = 180;                   // seconds the server should run for, 0 = run indefinitely
        $address = $_SERVER['SERVER_ADDR']; // the server's external IP
        $port = 9000;                       // the port to listen on
        $backlog = SOMAXCONN;               // maximum backlog of incoming connections queued for processing

        // configure custom PHP settings
        error_reporting(1);                 // report all errors
        ini_set('display_errors', 1);       // display all errors
        set_time_limit($timelimit);         // timeout after x seconds
        ob_implicit_flush();                // results in a flush operation after every output call

        // create master IPv4 based TCP socket
        if (!($master = socket_create(AF_INET, SOCK_STREAM, SOL_TCP)))
            die("Could not create master socket, error: ".socket_strerror(socket_last_error()));

        // set socket options (local addresses can be reused)
        if (!socket_set_option($master, SOL_SOCKET, SO_REUSEADDR, 1))
            die("Could not set socket options, error: ".socket_strerror(socket_last_error()));

        // bind to socket server
        if (!socket_bind($master, $address, $port))
            die("Could not bind to socket server, error: ".socket_strerror(socket_last_error()));

        // start listening
        if (!socket_listen($master, $backlog))
            die("Could not start listening to socket, error: ".socket_strerror(socket_last_error()));

        // display startup information
        // max connections is a kernel variable and can be adjusted with sysctl
        echo "[".date('Y-m-d H:i:s')."] SERVER CREATED (MAXCONN: ".SOMAXCONN.").\n";
        echo "[".date('Y-m-d H:i:s')."] Listening on ".$address.":".$port.".\n";
        $time = time(); // set startup timestamp

        // init read sockets array
        $read_sockets = array($master);

        // continuously handle incoming socket messages, or close if time limit has been reached
        while ((!$timelimit) or (time() - $time < $timelimit)) {
            $changed_sockets = $read_sockets;
            socket_select($changed_sockets, $write = null, $except = null, null);

            foreach ($changed_sockets as $socket) {
                if ($socket == $master) {
                    if (($client = socket_accept($master)) < 0) {
                        echo "[".date('Y-m-d H:i:s')."] Socket_accept() failed, error: ".socket_strerror(socket_last_error())."\n";
                        continue;
                    } else {
                        array_push($read_sockets, $client);
                        echo "[".date('Y-m-d H:i:s')."] Client #".count($read_sockets)." connected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n";
                    }
                } else {
                    // read a maximum of 1024 bytes until a new line has been sent
                    $data = @socket_read($socket, 1024, PHP_NORMAL_READ);
                    if ($data === false) { // the client disconnected
                        $index = array_search($socket, $read_sockets);
                        unset($read_sockets[$index]);
                        socket_close($socket);
                        echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n";
                    } else {
                        // remove whitespace and continue only if the message is not empty
                        if ($data = trim($data)) {
                            switch ($data) {
                                case "exit": // close connection when exit command is given
                                    $index = array_search($socket, $read_sockets);
                                    unset($read_sockets[$index]);
                                    socket_close($socket);
                                    echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n";
                                    break;
                                default: // for experimental purposes, write the given data back
                                    socket_write($socket, "\n you wrote: ".$data);
                            }
                        }
                    }
                }
            }
        }

        socket_close($master); // close the socket
        echo "[".date('Y-m-d H:i:s')."] SERVER CLOSED.\n";
        ?>

    Read the article

  • MS Access: How to replace blank (null) values with 0 for all records?

    - by rick
    MS Access: how do I replace blank (null) values with 0 for all records? I guess it has to be done using SQL. I can use Find and Replace to replace 0 with blank, but not the other way around (it won't "find" a blank, even if I enter [Ctrl-Spacebar], which inserts a space). So I guess I need SQL that finds null values for MyField and replaces all of them with 0. Any help is greatly appreciated. Thanks, The Find & Replace guy.
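
    The key statement is a one-line UPDATE. As a sketch, here it is run from C# over OLE DB (the file path is a placeholder, and MyTable/MyField are the names from the question; the same UPDATE can equally be pasted into an Access query in SQL view):

        using System;
        using System.Data.OleDb;

        class FixNulls
        {
            static void Main()
            {
                // The Jet provider suits .mdb files; for .accdb use
                // Provider=Microsoft.ACE.OLEDB.12.0 instead.
                string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                                 "Data Source=C:\\path\\to\\database.mdb";

                using (var conn = new OleDbConnection(connStr))
                {
                    conn.Open();
                    // The part that matters: set every null MyField to 0.
                    var cmd = new OleDbCommand(
                        "UPDATE MyTable SET MyField = 0 WHERE MyField IS NULL",
                        conn);
                    Console.WriteLine(cmd.ExecuteNonQuery() + " rows updated.");
                }
            }
        }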

    Read the article

  • Windows Server 2008 Antivirus Software with an API

    - by Dave Jellison
    I'm looking for an antivirus package that is compatible with Windows Server 2008. That's not the hard part. What I need is an API layer on the antivirus that I can call from managed .NET code. For example: I am developing an ASP.NET (C#) website that allows users to upload files to the web server the site resides on. We have full control of the server, so there are no security/rights issues there. I need to be able to run the antivirus scan on newly uploaded files without (hopefully) shelling out to a command-line version of the software. Does anyone know of such a package?

    Read the article

  • Is there a way to customise messages produced by statements in MS SQL Query Analyzer?

    - by Scott Leis
    If I run a simple query in SQL Query Analyzer, like:

        SELECT * FROM TableName

    the Messages pane always produces a message like:

        (30 row(s) affected)

    If I run a stored procedure with many statements, the messages are useless because there's no indication of what each one relates to. So firstly: is there a way to customise the default messages on a per-query basis? E.g. I'd like a specific query to produce a message like "TableName query produced [numRowsAffected] results.", replacing [numRowsAffected] with the number that would have appeared in the default message. Secondly, is there a way to suppress the default messages on a per-query basis? E.g. I have a local variable of type TABLE, used in several statements, and I don't want any message to appear for statements where I'm just deleting data from that variable before re-using it. I'm seeking solutions that work in SQL Server 8.0.
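
    A sketch of one way to get both behaviours: SET NOCOUNT ON suppresses the default "(n row(s) affected)" messages, and PRINT with @@ROWCOUNT emits a custom one. It is shown here driven from C# so the messages can be captured via the InfoMessage event, but the T-SQL in the batch is the part that matters and runs as-is in Query Analyzer against SQL Server 8.0; the table name and connection string are placeholders:

        using System;
        using System.Data.SqlClient;

        class CustomMessages
        {
            static void Main()
            {
                const string batch = @"
                    SET NOCOUNT ON;   -- suppress the default row-count messages
                    SELECT * FROM TableName;
                    PRINT 'TableName query produced '
                          + CAST(@@ROWCOUNT AS varchar(12)) + ' results.';";

                using (var conn = new SqlConnection(
                    "Server=.;Database=MyDb;Integrated Security=true"))
                {
                    // PRINT output arrives as informational messages.
                    conn.InfoMessage += (s, e) => Console.WriteLine(e.Message);
                    conn.Open();
                    using (var cmd = new SqlCommand(batch, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { /* consume rows */ }
                    }
                }
            }
        }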

    Read the article

  • Applet in client-server infrastructure

    - by Andrey
    Hello! I have a general question concerning client-server design. We have a Java server with Spring, a GWT client program, and some HTTP servlets for our site. We now also want to develop an applet that would communicate with that server in the same way the GWT client and site requests do. Is it a good idea to communicate with the server from the applet via RMI? I.e. to create some remote services, register them with Spring and call them from the applet? Thanks in advance!

    Read the article

  • Is there a simple PHP development server?

    - by pinchyfingers
    When writing web apps in Python, it's brain-dead easy to run a development server. Django and Google App Engine both ship with simple servers. The main feature I'm looking for is no configuration. I want something like the GAE dev server, where you just pass the directory of the app as a parameter when the server is started. Is there a reason this is more difficult with PHP?

    Read the article

  • Metaprogramming on web server

    - by bobobobo
    From time to time, I find myself writing server code that produces JavaScript code as the output result. I can point out why it is really bad: Inextricable tie between server code and client code. Can render client code un-reusable. But sometimes, it just seems to make sense. And isn't it kinda sorta interesting? I guess the question is, is writing server code that produces JavaScript code a really bad practice, or "does everyone do it"?

    Read the article

  • Client-server application between two computers in the same network (using boost::asio)

    - by Edwin
    I'm trying to set up basic communication between my desktop PC and my laptop (the latter using a wireless connection), both on the same network, using the boost::asio tutorials: the synchronous client and synchronous server (in C++). When I run both the server and the client on the same machine (using localhost and the daytime port as parameters), it works splendidly.

    But if I try to set up the laptop as the server, I cannot connect to it with the client (set up on the PC), no matter what IP I try (localhost, and basically any IPs that ipconfig -all gave me). I tested the server with netstat -anb from the command prompt, and it is indeed running and listening on port 13 as it's supposed to, and I even deactivated the firewall to make sure it doesn't cause any problems. So no matter what I try, I cannot find the correct address that the client can use to connect to the server. Could anyone help me, please?
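
    A quick way to separate a network problem from a code problem (a hedged aside, in C# only because it compiles out of the box on Windows; the IP below is a placeholder for the laptop's LAN address from ipconfig):

        using System;
        using System.Net.Sockets;

        class ReachabilityTest
        {
            static void Main()
            {
                try
                {
                    // Port 13 = the daytime port the asio tutorial server listens on.
                    using (var client = new TcpClient("192.168.1.23", 13))
                        Console.WriteLine("Connected - the server is reachable.");
                }
                catch (SocketException ex)
                {
                    Console.WriteLine("Connect failed: " + ex.Message);
                }
            }
        }

    If this also fails from the desktop, the usual suspects are the server binding to the loopback interface instead of 0.0.0.0, a router that isolates wireless clients, or a second firewall layer, rather than anything in the asio client code.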

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files.

    How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let’s get to see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100,
          MAXSIZE = 500,
          FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB,
          MAXSIZE = 250MB,
          FILEGROWTH = 5MB );
        go
        USE VLF_Test
        go
        DBCC LOGINFO;

    The results of this are firstly a new database created with the specified file sizes, and then the DBCC LOGINFO results returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes, and you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB.

    Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB,
          MAXSIZE = 25GB,
          FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB,
          MAXSIZE = 250GB,
          FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:
    - If the file growth is lower than 64MB then 4 VLFs are created.
    - If the growth is between 64MB and 1GB then 8 VLFs are created.
    - If the growth is greater than 1GB then 16 VLFs are created.

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let’s add some data and see what happens.

        USE vlf_test
        go

        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        go

        CREATE TABLE A_Table
        (
            Col_A int IDENTITY,
            Col_B CHAR(8000)
        )
        GO

        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        go

        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO

        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        go

        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10

        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are:

    1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * If you don’t know the file name of your log file, run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created: By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.
    Hassle-free monitoring of VLFs: If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
    - MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    - Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    - Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c - SQL Text Expansion. It will come in handy in two cases:

    1. You are asked to tune what looks like a simple query - maybe a two table join with simple predicates. But it turns out the two tables are each views of views of views and so on... In other words, you've been asked to 'tune' a 15 page query, not a two liner.
    2. You are asked to take a look at a query against tables with VPD (virtual private database) policies. In other words, you have no idea what you are trying to 'tune'.

    A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example - take the common view ALL_USERS - we can now:

        ops$tkyte%ORA12CR1> variable x clob
        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from all_users',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED" "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME","A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$" "A2" WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND "A4"."TYPE#"=1) "A1"

    Now it is easy to see what query is really being executed at runtime - regardless of how many views of views you might have. You can see the expanded text, and that will probably lead you to the conclusion that maybe that 27 table join to 25 tables you don't even care about might better be written as a two table join.

    Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best. Christian Antognini wrote up a way to sort of see it - but you never get to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/. But now with this function it becomes rather trivial to see the expanded SQL - after the VPD has been applied.
    We can see this by setting up a small table with a VPD policy:

        ops$tkyte%ORA12CR1> create table my_table
          2  (  data        varchar2(30),
          3     OWNER       varchar2(30) default USER
          4  )
          5  /

        Table created.

        ops$tkyte%ORA12CR1> create or replace
          2  function my_security_function( p_schema in varchar2,
          3                                 p_object in varchar2 )
          4  return varchar2
          5  as
          6  begin
          7     return 'owner = USER';
          8  end;
          9  /

        Function created.

        ops$tkyte%ORA12CR1> begin
          2     dbms_rls.add_policy
          3     ( object_schema   => user,
          4       object_name     => 'MY_TABLE',
          5       policy_name     => 'MY_POLICY',
          6       function_schema => user,
          7       policy_function => 'My_Security_Function',
          8       statement_types => 'select, insert, update, delete',
          9       update_check    => TRUE );
         10  end;
         11  /

        PL/SQL procedure successfully completed.

    And then expanding a query against it:

        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from my_table',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA" "DATA","A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2" WHERE "A2"."OWNER"=USER@!) "A1"

    Not an earth shattering new feature - but extremely useful in certain cases. I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty page plan associated with it!

    Read the article

  • Register a DLL in Windows Server 2003

    - by Karee
    I'm trying to register some DLLs for a particular application but having some problems. The DLLs are stored in the application folder on the D: drive. I run cmd.exe, go to the D: drive and run regsvr32, but it gives me the error: "The [DLL name] was loaded, but the DllRegisterServer entry point was not found. This file cannot be registered." After searching for other solutions, I also tried copying the DLLs into the C:\WINDOWS\system32 folder and running regsvr32 directly there, but that didn't work either; it gave me the same error. I'm not very familiar with Windows administration, so I may need detailed instructions/explanations. Any help would be greatly appreciated. Thank you very much!

    Read the article

  • Hyper-V Deployment Options Best Practices

    - by Erv Walter
    In what circumstances would you choose each of the following deployment options: Hyper-V installed as the bare bones Windows Hyper-V Server 2008 R2 Hyper-V role installed on a Windows Server 2008 R2 Server Core installation Hyper-V role installed on a Windows Server 2008 R2 Full Installation For example, I know there are licensing considerations for each option: With Hyper-V on top of a full installation of Enterprise or Data Center edition, you can use Windows Server as a guest OS without needing additional licenses (4 for Enterprise, unlimited for Data Center) With "Windows Hyper-V Server" you have to obtain licenses for each guest OS. But my real question is, are there technical considerations as well? I understand that the Full Installation doesn't perform as well as the other two options, but is there a significant difference between Server Core and "Windows Hyper-V Server"? What are the pros and cons of Hyper-V on Server Core vs "Windows Hyper-V Server" and when would you choose each?

    Read the article
