Search Results

Search found 57907 results on 2317 pages for 'database access'.


  • Is it possible to run multiple mongod instances on a single set of database files?

    - by 9point6
    We have large multi-gigabyte data sets on which we run very complex queries, for example: { $or: [ { id: 30000001, ... }, { id: 30000005, ... }, ..., { id: 30001005, ... } ] }. It seems that CPU is the bottleneck at this point, so it would be advantageous to be able to run multiple mongod instances on the same set of database files. We've considered using replica sets to this end, but would prefer not to require the extra disk space simply for CPU reasons.
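
    For what it's worth, the replica-set route we're weighing would look roughly like this in the mongo shell (a sketch only; host names are placeholders), and each member still needs its own full copy of the data files, which is exactly the disk cost we'd like to avoid:

        // sketch: a three-member replica set so reads can be spread out
        rs.initiate({
            _id: "rs0",
            members: [
                { _id: 0, host: "db1:27017" },
                { _id: 1, host: "db2:27017" },
                { _id: 2, host: "db3:27017" }
            ]
        })
        // allow this connection to read from a secondary
        rs.slaveOk()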

    Read the article

  • Define a variable in Linux that can be accessed in PHP

    - by sweb
    I added a system-wide Linux variable in /etc/profile: export MYNAME="My Value". How can I access this value in PHP source code when it runs via the Apache web server? In $_SERVER this value doesn't exist, and only these keys appear in $_ENV:

        _ENV["APACHE_RUN_DIR"]    /var/run/apache2
        _ENV["APACHE_PID_FILE"]   /var/run/apache2.pid
        _ENV["PATH"]              /usr/local/bin:/usr/bin:/bin
        _ENV["APACHE_LOCK_DIR"]   /var/lock/apache2
        _ENV["LANG"]              C
        _ENV["APACHE_RUN_USER"]   www-data
        _ENV["APACHE_RUN_GROUP"]  www-data
        _ENV["APACHE_LOG_DIR"]    /var/log/apache2
        _ENV["PWD"]               /
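
    One route that may work (a sketch, not verified here): Apache does not read /etc/profile, so the variable has to be handed to it explicitly, e.g. with mod_env in the Apache configuration (or via /etc/apache2/envvars on Debian/Ubuntu):

        # in the Apache config, with mod_env enabled
        SetEnv MYNAME "My Value"
        # or, to pass through a variable already in Apache's own environment
        PassEnv MYNAME

    After a restart, PHP should then see it:

        <?php
        echo $_SERVER['MYNAME'];   // set by SetEnv/PassEnv
        echo getenv('MYNAME');     // also works under mod_php
        ?>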

    Read the article

  • Is it possible to log the first line of the response in apache?

    - by Jeppe Mariager
    Hey, we have a Tomcat server where we're trying to log the HTTP version with which the response is sent. We've seen a few times that it seems to be HTTP/0.9, which kills the content (not supported, I guess?). We would like to get some stats on this by using the access log in Apache. However, since the header line for this isn't prefixed by anything, we cannot use the %{xxx}o logging. Is there a way to get this? An example response:

        HTTP/1.1 503 This application is not currently available
        Server: Apache-Coyote/1.1
        Content-Type: text/html;charset=utf-8
        Content-Length: 1090
        Date: Wed, 12 May 2010 12:53:16 GMT
        Connection: close

    And we'd like to catch HTTP/1.1 (or alternatively the whole status line, HTTP/1.1 503 This application is not currently available). Is this possible? We do not have access to the application being served, so we need to do this either as a Java filter or in the Tomcat access log - preferably in the access log.

    Read the article

  • facebook access_token problem

    - by user559711
    Hi, I just wrote a little application (4 PHP pages) and everything works fine. However, I have a question: do I need to create a new instance of Facebook (like $facebook = new Facebook(...)) on every new PHP page, or can I just pass an access token or session? If I only pass the access token, how can I use the function $facebook->api('something'); to retrieve the data? Because I'm a beginner in PHP, I have no idea how access tokens work. Please help, thanks a lot! Regards, YK
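
    For context, with the official Facebook PHP SDK the usual pattern (a sketch; the app id and secret are placeholders, and details vary between SDK versions) is to instantiate the object on every page; the SDK persists the user's session/access token itself, so re-creating the object is cheap:

        <?php
        require 'facebook.php';

        // sketch: create the object on each page; the SDK restores the session
        $facebook = new Facebook(array(
            'appId'  => 'YOUR_APP_ID',
            'secret' => 'YOUR_APP_SECRET',
        ));

        $user = $facebook->getUser();
        if ($user) {
            $profile = $facebook->api('/me'); // uses the stored access token
        }
        ?>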

    Read the article

  • Request data from database using Django

    - by user21901
    I have two variables, for example x and y. I have also made a model with 3 fields (longitude, latitude, name) and have it set up in a MySQL database. I need to send these two variables (x, y) to the Django server so as to search whether there is an object with longitude=x and latitude=y. If there is one, I want to get back its name. How can I do this?
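
    A minimal sketch of the lookup, assuming the model is called Place and x/y arrive as GET parameters (all names here are illustrative):

        # views.py - sketch; model and parameter names are assumptions
        from django.http import HttpResponse, Http404
        from myapp.models import Place

        def place_name(request):
            x = request.GET.get('x')
            y = request.GET.get('y')
            try:
                place = Place.objects.get(longitude=x, latitude=y)
            except Place.DoesNotExist:
                raise Http404("No object at these coordinates")
            return HttpResponse(place.name)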

    Read the article

  • Heroku Problem During Database Pull of Rails App: Mysql::Error MySQL server has gone away

    - by Rich Apodaca
    Attempting to pull my database from Heroku gives an error partway through the process (below). Using: Snow Leopard; heroku-1.8.2; taps-0.2.26; rails-2.3.5; mysql-5.1.42. The database is smallish, as you can see from the error message. Heroku tech support says it's a problem on my system, but offers nothing in the way of how to solve it. I've seen the issue reported before - for example here. How can I get around this problem? The error:

        $ heroku db:pull
        Auto-detected local database: mysql://[...]@localhost/[...]?encoding=utf8
        Receiving schema
        Receiving data
        17 tables, 9,609 records
        [...]
        /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:166:in `query': Mysql::Error MySQL server has gone away (Sequel::DatabaseError)
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:166:in `_execute'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:125:in `execute'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/connection_pool.rb:101:in `hold'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:461:in `synchronize'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:125:in `execute'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:296:in `execute_dui'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset.rb:276:in `execute_dui'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:365:in `execute_dui'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `each'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:144:in `transaction'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/connection_pool.rb:108:in `hold'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:461:in `synchronize'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:138:in `transaction'
            from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import'
            from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:211:in `cmd_receive_data'
            from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:203:in `loop'
            from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:203:in `cmd_receive_data'
            from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:196:in `each'
            from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:196:in `cmd_receive_data'
            from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:175:in `cmd_receive'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:17:in `pull'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:119:in `taps_client'
            from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:21:in `start'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:115:in `taps_client'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:16:in `pull'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:45:in `send'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:45:in `run_internal'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:17:in `run'
            from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/heroku:14
            from /usr/bin/heroku:19:in `load'
            from /usr/bin/heroku:19
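
    One lead worth checking (an assumption, not something Heroku confirmed): "MySQL server has gone away" is the classic symptom of the local MySQL server dropping a packet larger than max_allowed_packet, which would fit the bulk inserts taps performs. Raising it in my.cnf and restarting MySQL looks like:

        [mysqld]
        max_allowed_packet = 64M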

    Read the article

  • Android Sqlite - obtaining the correct database row id

    - by Dan_Dan_Man
    I'm working on an app that allows the user to create notes while rehearsing a play. The user can view the notes they have created in a listview, and edit and delete them if they wish. Take for example the user creates 3 notes. In the database, the row_id's will be 1, 2 and 3. So when the user views the notes in the listview, they will also be in the order 1, 2, 3 (initially 0, 1, 2 before I increment the values). So the user can view and delete the correct row from the database. The problem arises when the user decides to delete a note. Say the user deletes the note in position 2. Thus our database will have row_id's 1 and 3. But in the listview, they will be in positions 1 and 2. So if the user clicks on the note in position 2 in the listview it should return the row in the database with row_id 3. However it tries to look for row_id 2, which doesn't exist, and hence crashes. I need to know how to obtain the corresponding row_id, given the user's selection in the listview. Here is the code that does this:

        // When the user selects "Delete" in context menu
        public boolean onContextItemSelected(MenuItem item) {
            AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo();
            switch (item.getItemId()) {
            case DELETE_ID:
                deleteNote(info.id + 1);
                return true;
            }
            return super.onContextItemSelected(item);
        }

        // This method actually deletes the selected note
        private void deleteNote(long id) {
            Log.d(TAG, "Deleting row: " + id);
            mNDbAdapter.deleteNote(id);
            mCursor = mNDbAdapter.fetchAllNotes();
            startManagingCursor(mCursor);
            fillData();
            // TODO: Update play database if there are no notes left for a line.
        }

        // When the user clicks on an item, display the selected note
        protected void onListItemClick(ListView l, View v, int position, long id) {
            super.onListItemClick(l, v, position, id);
            viewNote(id, "", "", true);
        }

        // This is where we display the note in a custom alert dialog. I've omitted
        // the rest of the code in this method because the problem lies in this line:
        // "mCursor = mNDbAdapter.fetchNote(newId);"
        // I need to replace "newId" with the row_id in the database.
        private void viewNote(long id, String defaultTitle, String defaultNote, boolean fresh) {
            final int lineNumber;
            String title;
            String note;
            id++;
            final long newId = id;
            Log.d(TAG, "Returning row: " + newId);
            mCursor = mNDbAdapter.fetchNote(newId);
            lineNumber = (mCursor.getInt(mCursor.getColumnIndex("number")));
            title = (mCursor.getString(mCursor.getColumnIndex("title")));
            note = (mCursor.getString(mCursor.getColumnIndex("note")));
            ...
        }

    Let me know if you would like me to show any more code. It seems like something so simple, but I just can't find a solution. Thanks!
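
    A sketch of the likely fix, assuming fillData() backs the list with a CursorAdapter (e.g. SimpleCursorAdapter) keyed on the _id column: the long id Android hands to these callbacks is already the database row id, so the position-based arithmetic can go away entirely:

        // sketch - use the adapter-supplied row id instead of position + 1
        public boolean onContextItemSelected(MenuItem item) {
            AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo();
            switch (item.getItemId()) {
            case DELETE_ID:
                deleteNote(info.id);   // info.id is the _id of the underlying row
                return true;
            }
            return super.onContextItemSelected(item);
        }

        protected void onListItemClick(ListView l, View v, int position, long id) {
            super.onListItemClick(l, v, position, id);
            viewNote(id, "", "", true);  // id is the _id; drop the id++ in viewNote
        }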

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. The two biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be between 1 MB and 4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server profiler, and nothing jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike? Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, the checkpoints will occur when the transaction log becomes 70% full and we are using the simple recovery model. This has been enlightening and I've definitely learned something!
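
    For anyone who wants to reproduce the correlation, a quick way to watch the log fill toward the checkpoint threshold while the trim service runs (a sketch; run it periodically):

        -- reports the percentage of the transaction log in use for each database
        DBCC SQLPERF(LOGSPACE);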

    Read the article

  • Exchange 2010 OWA - a few questions about using multiple mailboxes

    - by Alexey Smolik
    We have an Exchange 2010 SP2 deployment and we need our users to be able to access multiple mailboxes in OWA. The problem is that a user (e.g. John Smith) needs to access not just somebody else's (e.g. Tom Anderson's) mailboxes, but his OWN mailboxes, e.g. in different domains: [email protected], [email protected], [email protected], etc. Of course it is preferable for the user to work with all of his mailboxes from a single window. Such mailboxes can be added as multiple Exchange accounts in Outlook, and that works almost fine. But in OWA, there are problems:

    1) In the left pane - as I've learned - we can open only Inbox folders from other mailboxes. Is there no way to view all folders, like in Outlook?

    2) With Send-As permissions set, when trying to send a message from another address, that message is saved in the Sent Items folder of the mailbox that is opened in OWA, and not in the mailbox the message is sent from. The same thing happens with the trash can. Is there a way to fix that? Also, this problem exists in desktop Outlook when mailboxes are added automatically via the Auto-Mapping feature, so we need to turn it off and add the accounts manually. Is there a simpler workaround (a possible one is sketched after this list)?

    3) Okay, suppose we only open Inbox folders in the left pane. The problem is that the mailbox names shown there are formed from Display Name attributes. But those names are all identical! All the mailboxes are owned by John Smith, so they should all be named John Smith - so that a letter recipient sees "John Smith" in the "from" field, no matter which mailbox it is sent from. Also, the user knows his own name - no need to tell him. He wants to know which mailbox he is working with. So we need a way to either: a) customize OWA to show the mailbox email address instead of the user Display Name, or b) make Exchange use another attribute to put in the "from" field when sending letters.

    4) Okay, we can switch between mailboxes using "Open Other Mailbox" in the upper-right corner menu. But: a) To select a mailbox we need to enter its name (or its first letters). Is there a way to show a list of links to the mailboxes the user has full access to, e.g. in the page header? b) If we start entering the first letters, we see a popup list of possible mailboxes to open. But it contains all mailboxes (apparently from the GAL), not only the mailboxes the user has permission to open! How do we filter that popup list? c) The same mailbox-naming problem as in (3). We can see the opened mailbox's email address ONLY in the page URL, which is insufficient for many users. In the left pane we see "John Smith", which is useless.

    5) Each mailbox is tied to a separate user in AD. If one person has several mailboxes, we need to have additional dummy AD accounts, create additional OUs to store them, etc. That's not very nice; is there any standardized, optimal way to build such a structure?

    We would really appreciate any answers or additional info on any of these questions. Thank you in advance.
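
    On point (2), the manual-add chore may be avoidable: since Exchange 2010 SP1, Auto-Mapping can be switched off per grant when full access is assigned, so Outlook no longer attaches the mailbox automatically (a sketch; identities are placeholders):

        # grant full access without Outlook auto-mapping the mailbox
        Add-MailboxPermission -Identity "jsmith-domain2" -User "jsmith" -AccessRights FullAccess -AutoMapping $false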

    Read the article

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which are then backed up to tape nightly.

        8pm Mon-Fri - Full backup, followed by log backup
        7am-7pm Mon-Fri, at half-hour intervals - Log backup

    Our backups have been working in this manner since we migrated from SQL Server Standard 2000 to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week, we've had no problems. I guess the issue may have something to do with the size of the log backup that's attempted after a weekend of no backups. Now onto the issue I need a fix for... All this week, every full backup of our two biggest databases has failed (both backups < 1GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue has to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required to complete them. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the Job History, I can find a few hints as to what the problem could be...

    Code: 0xC002F210 (always this code, but a mix of the following descriptions...)

        "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."
        "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."

    Please help save my hair and sanity!!
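
    The stopgap we're leaning towards, sketched below with placeholder paths: back up to local disk, which works reliably, then move the file to the share in a second job step, so SQL Server never writes across the network:

        -- step 1: back up locally first
        BACKUP DATABASE [BigDatabase]
            TO DISK = N'D:\LocalBackups\BigDatabase.bak'
            WITH INIT, COMPRESSION;

        -- step 2 (separate cmd step in the job):
        -- robocopy D:\LocalBackups \\drserver\SQLBackups BigDatabase.bak /Z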

    Read the article

  • Command line scripts to restore the 4 system databases of MS SQL Server 2008

    - by ciscokid
    Hi there, can someone give me some advice on how to restore the 4 system databases (master, msdb, model, tempdb) of a SQL Server 2008, please? I've already done some testing myself (on restoring the master database) with the following command line script as a result:

        ::set variables
        set dbname=master
        set dbdirectory=C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA
        title Restoring %dbname% database
        net stop mssqlserver
        cd C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn
        sqlservr -m
        sqlcmd -Slocalhost -E -Q "restore database master from disk='c:\master.bak' WITH REPLACE"
        net start mssqlserver
        pause

    After the execution of the 'sqlservr -m' command (used to start the server instance in single-user mode, which is only necessary when restoring the MASTER database), the script stops. So in order to execute the last 2 commands, I need to separate the script into 2 smaller scripts and run them one after the other. Does anyone have an idea how I can merge them into one single script that runs completely without any interruption? I also want to restore the other 3 system databases using command line scripts like this one. Can someone please advise me on how to go on? I've already noticed that restoring the tempdb is not so easy, but there has to be a way... Looking forward to your advice!
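
    A sketch of how the blocking 'sqlservr -m' step might be avoided (assuming the default instance name MSSQLSERVER): start the service itself in single-user mode, so control returns to the script immediately:

        ::restore master without a blocking foreground sqlservr process (sketch)
        net stop MSSQLSERVER
        net start MSSQLSERVER /m
        sqlcmd -Slocalhost -E -Q "RESTORE DATABASE master FROM DISK='c:\master.bak' WITH REPLACE"
        ::restoring master shuts the instance down again, so simply restart it
        net start MSSQLSERVER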

    Read the article

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I've noticed a couple of things lately: Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :) Finally, several companies like Intel, Samsung, FusionIO, etc. have just started selling extremely fast yet affordable SSD hard drives based on SLC Flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB). So, there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories! PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi, I have about 200 sites, each of which has 2 servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above (the sequence I have in mind is sketched below). I've seen "Tools to manage sql 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love to be able to use a script to ensure there are matching endpoints (same ports) on both servers, back up the database, back up the log, copy the backups to the second server, restore the database and log with NORECOVERY, set the partners on both servers, and somehow confirm that the databases are linked and synchronized. Well, thanks for reading :)
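
    For reference, the per-database T-SQL sequence I'm trying to automate looks roughly like this (a sketch with placeholder names and ports; the error handling is the part I still need to work out):

        -- on the principal: back up the database and log
        BACKUP DATABASE [MyDb] TO DISK = N'\\mirror\backups\MyDb.bak' WITH INIT;
        BACKUP LOG [MyDb] TO DISK = N'\\mirror\backups\MyDb.trn';

        -- on the mirror: restore both with NORECOVERY
        RESTORE DATABASE [MyDb] FROM DISK = N'C:\backups\MyDb.bak' WITH NORECOVERY;
        RESTORE LOG [MyDb] FROM DISK = N'C:\backups\MyDb.trn' WITH NORECOVERY;

        -- on each server, once per server (not per database)
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- set the partners: on the mirror first, then on the principal
        ALTER DATABASE [MyDb] SET PARTNER = 'TCP://principal.example.com:5022';  -- run on mirror
        ALTER DATABASE [MyDb] SET PARTNER = 'TCP://mirror.example.com:5022';     -- run on principal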

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi, we have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand Point of Sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting DB or any of the system replication (e.g. distribution) databases fail, then it's no big deal - we can clear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP would be time-consuming (approx 5 hours), but I'd be more relaxed about that than about trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than mine, e.g. if there were multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

    Read the article

  • Which free RDBMS is best for small in-house development?

    - by Nic Waller
    I am the sole sysadmin for a small firm of about 50 people, and I have been asked to develop an in-house application for tracking job completion and providing reports based on that data. I'm planning on building it as a web application. I have roughly equal experience developing for MySQL, PostgreSQL, and MSSQL. We are primarily a Windows-based shop, but I'm fairly comfortable with both Windows and Linux system administration. These are my two biggest concerns:

    1. Ease of manageability. I don't expect to be maintaining this database forever. For the sake of the person who eventually has to take over for me, which database has the lowest barrier to entry?

    2. Data integrity. This means transaction-safe, robust storage, and easy backup/recovery. Even better if the database can be easily replicated.

    There is not a lot of budget for this project, so I am restricted to working with one of the free database systems mentioned above. What would you choose?

    Read the article

  • SoapUI & populating the database beforehand

    - by mada
    SoapUI & database data needed. Hi, in a J2EE project (Spring WS + Hibernate + PostgreSQL + Maven 2 + Tomcat), we have created web services with full CRUD operations. We are currently testing them with SoapUI (not yet integrated into the integration-test Maven phase); we need Tomcat to test them. At the moment, before the test phase, a Maven plugin injects some data into the database (only once, before all the tests) through several SQL files. The problem is that to test some web services I need some data entries and their ids (primary keys).

    A - What is the best way to create them?

    1- Populate one of the existing .sql test files. But then there is a tight dependency: I have to be careful to use in SoapUI the same id that the insert in the SQL file used, and if any developer drops that value, the test will break.

    2- Create among the .sql test files a soapui.sql where ALL the data needed for the SoapUI tests is concentrated. The advantage is that maintenance will be much easier.

    3- Because I have web services available for all CRUD operations, I can run before any test suite a preliminary suite called setup-testsuiteX. In it, I would use those web services to inject data into the database and save the ids in variables, thanks to the "property transfer" feature.

    4- Launch the SoapUI test XML file from a JUnit Java test, and in the setup method (we are using Spring Test) populate the database with DbUnit. Drawback: there are already data in the database, and conflicts may appear due to constraints; and there would be a dependency between the DbUnit XML file (holding the primary key values) and the SoapUI tests where THOSE primary key values are used. Advantage: an automatic rollback is possible for the DbUnit data, but I don't know how it will behave, because those data will have been used by the web services.

    5- Your suggestion? A better idea?

    B - Testing those web services creates data in the DB. What is the best way to remove it at the end? (In our DAO tests we used an automatic rollback with Spring Test to keep the database clean.)

    Thanks in advance for your help, and sorry for my English, it is not my mother tongue. Regards.

    Read the article

  • How to insert a DataTable with existing keys into a SQL Server table

    - by user296575
    I am working with VB.NET. I have a DataTable called "QUESTION" containing 3 fields: QuestionNumber (unique integer key), QuestionText, QuestionType. In my SQL Server database I created a table called "QUESTION" with the same fields; QuestionNumber is defined as a unique integer key with auto-increment. Now, when I do a bulk copy to insert the DataTable into SQL Server, the database overwrites the QuestionNumber values from the DataTable and generates new ones (starting from 1, incrementing by 1). How do I have to change my database setup so that the original QuestionNumbers are copied into the database?
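
    It may not require a schema change at all; if SqlBulkCopy performs the insert, the KeepIdentity option tells the server to keep the source key values instead of generating new ones (a sketch; variable names are placeholders):

        ' Sketch: preserve the DataTable's QuestionNumber values during bulk copy
        ' (assumes Imports System.Data.SqlClient)
        Using bulk As New SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)
            bulk.DestinationTableName = "QUESTION"
            bulk.WriteToServer(questionTable)
        End Using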

    Read the article

  • SQL Server 2008 Filestream Impersonation Access Denied error

    - by Adi
    I've been trying to upload a file to the database using SQL Server 2008 FILESTREAM and the impersonation technique to save the file in the file system, but I keep getting an Access Denied error, even though I've set the permissions for the impersonating user on the Filestream folder (C:\SQLFILESTREAM\Dev_DB). When I debugged the code, I found the server returns a UNC path (\\Server_Name\MSSQLSERVER\v1\Dev_LMDB\dbo\FileData\File_Data\13C39AB1-8B91-4F5A-81A1-940B58504C17), which was not accessible through Windows Explorer. My web application is hosted on my local machine (Windows 7). SQL Server is located on a remote server (Windows Server 2008 R2). SQL authentication was used to call the stored procedure. Following is the code I've used to do the above operations:

        SqlCommand sqlCmd = new SqlCommand("AddFile");
        sqlCmd.CommandType = CommandType.StoredProcedure;
        sqlCmd.Parameters.Add("@File_Name", SqlDbType.VarChar, 512).Value = filename;
        sqlCmd.Parameters.Add("@File_Type", SqlDbType.VarChar, 5).Value = Path.GetExtension(filename);
        sqlCmd.Parameters.Add("@Username", SqlDbType.VarChar, 20).Value = username;
        sqlCmd.Parameters.Add("@Output_File_Path", SqlDbType.VarChar, -1).Direction = ParameterDirection.Output;
        DAManager PDAM = new DAManager(DAManager.getConnectionString());
        using (SqlConnection connection = (SqlConnection)PDAM.CreateConnection())
        {
            connection.Open();
            SqlTransaction transaction = connection.BeginTransaction();
            WindowsImpersonationContext wImpersonationCtx;
            NetworkSecurity ns = null;
            try
            {
                PDAM.ExecuteNonQuery(sqlCmd, transaction);
                string filepath = sqlCmd.Parameters["@Output_File_Path"].Value.ToString();
                sqlCmd = new SqlCommand("SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()");
                sqlCmd.CommandType = CommandType.Text;
                byte[] Context = (byte[])PDAM.ExecuteScalar(sqlCmd, transaction);
                byte[] buffer = new byte[4096];
                int bytedRead;
                ns = new NetworkSecurity();
                wImpersonationCtx = ns.ImpersonateUser(IMP_Domain, IMP_Username, IMP_Password,
                    LogonType.LOGON32_LOGON_INTERACTIVE, LogonProvider.LOGON32_PROVIDER_DEFAULT);
                SqlFileStream sfs = new SqlFileStream(filepath, Context, System.IO.FileAccess.Write);
                while ((bytedRead = inFS.Read(buffer, 0, buffer.Length)) != 0)
                {
                    sfs.Write(buffer, 0, bytedRead);
                }
                sfs.Close();
                transaction.Commit();
            }
            catch (Exception ex)
            {
                transaction.Rollback();
            }
            finally
            {
                sqlCmd.Dispose();
                connection.Close();
                connection.Dispose();
                ns.undoImpersonation();
                wImpersonationCtx = null;
                ns = null;
            }
        }

    Can someone help me with this issue? Reference exception:

        Type: System.ComponentModel.Win32Exception, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Message: Access is denied
        Source: System.Data
        NativeErrorCode: 5
        ErrorCode: -2147467259
        Data: System.Collections.ListDictionaryInternal
        TargetSite: Void OpenSqlFileStream(System.String, Byte[], System.IO.FileAccess, System.IO.FileOptions, Int64)
        Stack Trace:
            at System.Data.SqlTypes.SqlFileStream.OpenSqlFileStream(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize)
            at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize)
            at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access)

    Thanks

    Read the article

  • Help with storing/accessing user access roles C# Winforms

    - by user222453
    Hello! Firstly, I would like to thank you in advance for any assistance provided. I am new to software development and have designed several client/server applications over the last 12 months or so. I am currently working on a project that involves a user logging in to gain access to the application, and I am looking for the most efficient and "simple" method of storing the user's permissions once they are logged in, which can then be used throughout the application to restrict access to certain tabs on the main form. I have created a static class called "User", detailed below:

        static class User
        {
            public static int _userID;
            public static string _moduleName;
            public static string _userName;

            // copies one row of login data into the static fields and
            // returns the role name
            public static string UserData(object[] _dataRow)
            {
                _userID = (int)_dataRow[0];
                _userName = (string)_dataRow[1];
                _moduleName = (string)_dataRow[2];
                return _moduleName;
            }
        }

    When the user logs in and has been authenticated, I wish to store the _moduleName values in memory so I can control which tabs on the main form's tab control they can access. For example, if the user has been assigned the following roles in the database: "Sales Ledger" and "Purchase Ledger", they can only see the relevant tabs on the form, by way of a switch-case block once the login form is hidden and the main form is instantiated. I can store the userID and userName variables in the main form once it loads. Here we process the login data from the user:

        DataAccess _dal = new DataAccess();
        switch (_dal.ValidateLogin(txtUserName.Text, txtPassword.Text))
        {
            case DataAccess.ValidationCode.ConnectionFailed:
                MessageBox.Show("Database Server Connection Failed!");
                break;
            case DataAccess.ValidationCode.LoginFailed:
                MessageBox.Show("Login Failed!");
                _dal.RecordLogin(out errMsg, txtUserName.Text, workstationID, false);
                break;
            case DataAccess.ValidationCode.LoginSucceeded:
                frmMain frmMain = new frmMain();
                // <- here I access my DB and get the user permissions based on the current login
                _dal.GetUserPrivList(out errMsg, 2);
                frmMain.Show();
                this.Hide();
                break;
            default:
                break;
        }

        private void frmMain_Load(object sender, EventArgs e)
        {
            int UserID = User._userID;
        }

    That works fine. However, the role data can contain multiple permissions/roles, depending on what has been set in the database. How can I store the multiple values and access them via a switch-case block? Thank you again in advance.
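
    A sketch of one way to hold multiple roles (assuming GetUserPrivList returns one row per role): keep them in a list on the static class, then gate each tab by membership, since a switch can only match a single value. Control names below are placeholders:

        // sketch: collect every role for the user once, at login
        static class User
        {
            public static int UserID;
            public static string UserName;
            public static List<string> Modules = new List<string>();
        }

        // in frmMain_Load, after filling User.Modules from the privilege rows:
        if (!User.Modules.Contains("Sales Ledger"))
            tabControl1.TabPages.Remove(tabSalesLedger);
        if (!User.Modules.Contains("Purchase Ledger"))
            tabControl1.TabPages.Remove(tabPurchaseLedger);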

    Read the article

  • Cannot figure out how to take in generic parameters for an Enterprise Library SQL statement

    - by KallDrexx
    I have written a specialized class to wrap up the Enterprise Library database functionality for easier usage. The reasoning for using the Enterprise Library is that my applications commonly connect to both Oracle and SQL Server database systems. My wrapper handles creating connection strings on the fly, connecting, and executing queries, allowing my main code to write only a few lines for database work and error handling. As an example, my ExecuteNonQuery method has the following declaration:

        /// <summary>
        /// Executes a query that returns no results (e.g. insert or update statements)
        /// </summary>
        /// <param name="sqlQuery"></param>
        /// <param name="parameters">Hashtable containing all the parameters for the query</param>
        /// <returns>The total number of records modified, -1 if an error occurred</returns>
        public int ExecuteNonQuery(string sqlQuery, Hashtable parameters)
        {
            // Make sure we are connected to the database
            if (!IsConnected)
            {
                ErrorHandler("Attempted to run a query without being connected to a database.",
                             ErrorSeverity.Critical);
                return -1;
            }

            // Form the command
            DbCommand dbCommand = _database.GetSqlStringCommand(sqlQuery);

            // Add all the parameters
            foreach (string key in parameters.Keys)
            {
                if (parameters[key] == null)
                    _database.AddInParameter(dbCommand, key, DbType.Object, null);
                else
                    _database.AddInParameter(dbCommand, key, DbType.Object, parameters[key].ToString());
            }

            return _database.ExecuteNonQuery(dbCommand);
        }

    _database is defined as private Database _database;. Hashtable parameters are created via code similar to p.Add("@param", value);. The issue I am having is that it seems that with the Enterprise Library database framework you must declare the DbType of each parameter. This isn't an issue when you are calling the database code directly and forming the parameters yourself, but it doesn't work for a generic abstraction class such as mine. To try and get around that, I thought I could just use DbType.Object and assume the DB would figure it out based on the columns the SQL works with. Unfortunately, this is not the case, as I get the following error:

        Implicit conversion from data type sql_variant to varchar is not allowed. Use the CONVERT function to run this query.

    Is there any way to use generic parameters in a wrapper class, or am I just going to have to move all my DB code into my main classes?
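
    A sketch of one way around this without pushing DbType knowledge onto callers (the mapping below is an illustrative subset, not exhaustive): derive the DbType from the runtime type of each value, and stop converting non-string values with ToString():

        // sketch: map common CLR types to DbType instead of using DbType.Object
        private static DbType InferDbType(object value)
        {
            if (value == null)      return DbType.String;
            if (value is int)       return DbType.Int32;
            if (value is long)      return DbType.Int64;
            if (value is decimal)   return DbType.Decimal;
            if (value is double)    return DbType.Double;
            if (value is bool)      return DbType.Boolean;
            if (value is DateTime)  return DbType.DateTime;
            if (value is byte[])    return DbType.Binary;
            return DbType.String;   // fall back to string
        }

        // in the parameter loop, pass the raw value through unchanged:
        _database.AddInParameter(dbCommand, key, InferDbType(parameters[key]), parameters[key]);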

    Read the article

  • Should Databases be used just for persistence

    - by Raju
    A lot of web applications with a three-tier architecture do all the processing in the app server and use the database just for persistence, in order to have database independence. After paying a huge amount for a database, doing all the processing (including batch processing) in the app server and not using the power of the database seems like a waste. I have difficulty convincing people that we need to use the best of both worlds.

    Read the article
