Search Results

Search found 22238 results on 890 pages for 'db security'.

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. Security is a little tricky: the staff and patron networks need to be kept separate. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs that they didn't share any servers, they would still need to use the same two printers at the very least. Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network; it has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we depend heavily on some cloud-hosted software). tl;dr: we have a public network, and a NATed staff network within it. My question is: is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)

    Read the article

  • What are the pros and cons of using an in-memory DB rather than a ThreadLocal

    - by Pangea
    We have been using ThreadLocal so far to carry some data, so as not to clutter the API. However, there are some issues with this approach that I don't like: 1) over the years, the number of data items carried in thread locals has grown; 2) since we started using threads (for some lightweight processing), we have also been migrating this data onto the pool threads and copying it back again. I am thinking of using an in-memory DB for this instead (we don't want to add it to the API). I wonder whether this approach is good. What are the pros and cons? Thanks in advance.
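
    The second issue - thread-local state not following work onto pool threads - is easy to demonstrate. The question is about Java, but Python's threading.local has the same per-thread semantics; a minimal sketch of the pitfall:

        import threading
        from concurrent.futures import ThreadPoolExecutor

        context = threading.local()
        context.user = "alice"  # set on the main thread

        def work():
            # State set on the main thread is invisible on a pool thread,
            # hence the copy-in/copy-out dance the question describes.
            return getattr(context, "user", None)

        with ThreadPoolExecutor(max_workers=1) as pool:
            print(pool.submit(work).result())  # prints None, not "alice"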

    Read the article

  • Best way to auto-restore db on an hourly basis

    - by aron
    Hello, I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL 2008 database and restore it from the original. Red Gate SQL has some awesome tools for this, but they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? Then I could have a Windows scheduled task run the .exe every hour. It's simple and free... would this work? I'm using SQL Server 2008 R2 Web Edition. I understand that Red Gate is technically better because I can set it to analyze the db and only update the records that were altered, while the approach above is more of a sledgehammer.
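
    Copying the raw data file will fail while SQL Server holds it open, but a native BACKUP/RESTORE cycle avoids that problem entirely. A minimal sketch of the hourly job, shown here driving sqlcmd from Python - the database name, instance, and backup path are all hypothetical:

        import subprocess

        # Hypothetical names: adjust the database, server/instance, and path.
        restore_sql = (
            "ALTER DATABASE DemoDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; "
            "RESTORE DATABASE DemoDb FROM DISK = 'C:\\backups\\DemoDb.bak' WITH REPLACE; "
            "ALTER DATABASE DemoDb SET MULTI_USER;"
        )

        # Kick active sessions off, restore the pristine backup, reopen the db.
        subprocess.run(["sqlcmd", "-S", "localhost", "-Q", restore_sql], check=True)

    A Windows scheduled task can then run this script every hour, exactly as proposed for the .exe.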

    Read the article

  • Oracle XE as local recovery database and Oracle Standard as main db

    - by jbendahan
    Hi, I just wanted to know what you think about this. I have an app written in Visual Basic .NET as my front end and an Oracle 11g Standard database as the back end. I have about 20 PCs running this app locally; they're all inserting, updating, and deleting data in this single database. I want to develop a solution in case the server database crashes or cannot stay online. So I am thinking of having Oracle 10g XE on each PC, so that all the data is also stored in the local db, and running a process once in a while (e.g. every 15 minutes) to send/get the data to/from the main server. What do you think about this strategy?

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer, which then goes to the competitor. The administration of such a matrix is a nightmare, because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because a high view count does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • mongoid with rails - Database should be a Mongo::DB, not NilClass

    - by Adam T
    Greetings. I am trying to get Mongoid to work with my Rails app and I am getting an error: "Mongoid::Errors::InvalidDatabase in 'Shipment bol should be unique' Database should be a Mongo::DB, not NilClass". I have created the mongoid.yml file in my config directory and have mongodb running as a daemon. The config file is like so:

        defaults: &defaults
          host: localhost

        development:
          <<: *defaults
          database: ship-it-development

        test:
          <<: *defaults
          database: ship-it-test

        production:
          <<: *defaults
          host: <%= ENV['MONGOID_HOST'] %>
          port: <%= ENV['MONGOID_PORT'] %>
          database: <%= ENV['MONGOID_DATABASE'] %>

    All of my specs fail with the above error. I am using Rails 2.3.8. Anyone have ideas?

    Read the article

  • Diagnosing Logon Audit Failure event log entries

    - by Scott Mitchell
    I help a client manage a website that runs on a dedicated web server at a hosting company. Recently, we noticed that over the last two weeks there have been tens of thousands of Audit Failure entries in the Security event log with a Task Category of Logon - these have been coming in about every two seconds, but interestingly stopped altogether as of two days ago. In general, the event description looks like the following:

        An account failed to log on.

        Subject:
            Security ID:            SYSTEM
            Account Name:           ...The Hosting Account...
            Account Domain:         ...The Domain...
            Logon ID:               0x3e7

        Logon Type:                 10

        Account For Which Logon Failed:
            Security ID:            NULL SID
            Account Name:           david
            Account Domain:         ...The Domain...

        Failure Information:
            Failure Reason:         Unknown user name or bad password.
            Status:                 0xc000006d
            Sub Status:             0xc0000064

        Process Information:
            Caller Process ID:      0x154c
            Caller Process Name:    C:\Windows\System32\winlogon.exe

        Network Information:
            Workstation Name:       ...The Domain...
            Source Network Address: 173.231.24.18
            Source Port:            1605

    The value in the Account Name field differs. Above you see "david", but there are entries with "john", "console", "sys", and even ones like "support83423". The Logon Type field indicates that the logon attempt was a remote interactive attempt via Terminal Services or Remote Desktop. My presumption is that these are brute force attacks attempting to guess username/password combinations in order to log into our dedicated server. Are these presumptions correct? Are these types of attacks common? Is there a way to help stop them? We need to be able to access the desktop via Remote Desktop, so simply turning off that service is not feasible. Thanks

    Read the article

  • Querying an external Oracle db in a Rails application

    - by railscoder
    I have a website which uses a MySQL database for its whole operation. But for a new requirement, I need to query an external Oracle database (used by another component), compile a list of items, and display them on a page in the website. How is it possible to connect to an external database just for rendering a single page? And is it possible to cache the queried result for, say, 1 month before invalidating the cache and getting the updated list of items? I don't want to query the external Oracle db on every request.
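
    In Rails the usual route would be a second establish_connection plus the Rails cache; the cache-aside pattern itself, though, is language-neutral. A minimal sketch in Python, with a hypothetical cache path and a fetch function standing in for the Oracle query:

        import json
        import os
        import time

        CACHE_PATH = "/tmp/oracle_items.json"  # hypothetical cache location
        MAX_AGE = 30 * 24 * 3600               # roughly one month, in seconds

        def cached_items(fetch):
            # Cache-aside: serve the cached list until it is a month old,
            # then run the expensive external query once and rewrite the cache.
            if os.path.exists(CACHE_PATH) and time.time() - os.path.getmtime(CACHE_PATH) < MAX_AGE:
                with open(CACHE_PATH) as f:
                    return json.load(f)
            items = fetch()  # the query against the external Oracle db
            with open(CACHE_PATH, "w") as f:
                json.dump(items, f)
            return items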

    Read the article

  • Programmatically sync the db in Django

    - by Attila Oláh
    I'm trying to sync my db from a view, something like this:

        from django import http
        from django.core import management

        def syncdb(request):
            management.call_command('syncdb')
            return http.HttpResponse('Database synced.')

    The issue is, it will block the dev server by asking for user input from the terminal. How can I pass it the '--noinput' option to prevent it asking me anything? I have other ways of marking users as superusers, so there's no need for the user input, but I really need to call syncdb (and flush) programmatically, without logging on to the server via ssh. Any help is appreciated.
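
    call_command forwards command-line options as keyword arguments, so --noinput becomes interactive=False. The same view with that change (flush shown too, since the question mentions it):

        from django import http
        from django.core import management

        def syncdb(request):
            # interactive=False is the keyword form of --noinput, so neither
            # command will prompt for input on the dev server's terminal.
            management.call_command('syncdb', interactive=False)
            management.call_command('flush', interactive=False)
            return http.HttpResponse('Database synced.')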

    Read the article

  • Logging into oracle db as a global user

    - by kineas
    We are trying to shape up an old, 2-tier, Delphi-based application. It originally uses database authentication; we'd like to transform the db user accounts into global users, so an OID server can perform the authentication instead of the database. The Delphi program can no longer log into the database if the account is a global user. I'm trying to understand the login protocol, so far without results. A similar thing happens with SQL Developer: I can't connect as a global user. SQL*Plus, however, works with both kinds of users. We checked the information flow with Wireshark. When the db server asks back for a password, SQL*Plus sends it, while SQL Developer doesn't send a password when attempting to connect as a global user. The client also sends the application name in the login request. Is it possible that we have to store the client app name in the LDAP itself?

    Read the article

  • "No database selected" even when db clearly selected

    - by Someone
    One of my webpages gets a recurring error: "No database selected", even though the DB is selected. Right now it's a 50-50 chance whether the page will load just fine or I'll receive this error; after one or two reloads, the page works again. I am including the exact same connection file on my other pages, and I don't have this problem there. What could be the cause of this? I'm using Ensim Pro for web hosting. TIA.

    Read the article

  • advantages of Zend_Db_Table vs raw (My)SQL?

    - by sunwukung
    Currently working on a new Zend application and developing the model. Having worked with Zend_Db_Table before, I opted to replace the model's references to the Table API with a custom SQL script to take care of data access duties. Now I'm looking at developing a new application/domain model, and I wanted to get some feedback from people on their experiences with the Zend_Db API vs raw SQL, and on use cases where it would be preferable to use the API. From a project perspective, the DB platform is unlikely to change from MySQL - so it doesn't need to be particularly abstract - and I assume a custom SQL API will be more performant than the assorted classes the Zend DB API requires.

    Read the article

  • performance of parameterized queries for different db's

    - by tuinstoel
    A lot of people know that it is important to use parameterized queries to prevent SQL injection attacks. Parameterized queries are also much faster in SQLite and Oracle when doing online transaction processing, because the query optimizer doesn't have to reparse every parameterized SQL statement before executing it. I've seen SQLite become 3 times faster when you use parameterized queries, and Oracle can become 10 times faster in some extreme cases with a lot of concurrency. How about other dbs like MySQL, MS SQL, DB2, and PostgreSQL? Is there a comparable difference in performance between parameterized queries and literal queries?
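
    The SQLite claim is easy to check yourself. Python's sqlite3 keeps a per-connection cache of prepared statements, so the parameterized form is parsed once and reused, while each distinct literal statement is parsed from scratch. A rough benchmark sketch:

        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
        N = 100000

        start = time.perf_counter()
        for i in range(N):
            # Every statement text is different, so each one is parsed anew.
            conn.execute("INSERT INTO t VALUES (%d, 'x')" % i)
        literal = time.perf_counter() - start

        start = time.perf_counter()
        for i in range(N):
            # One statement text; the prepared form is reused from the cache.
            conn.execute("INSERT INTO t VALUES (?, 'x')", (i,))
        parameterized = time.perf_counter() - start

        print("literal %.2fs, parameterized %.2fs" % (literal, parameterized))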

    Read the article

  • NHibernate ValueType Collection as delimited string in DB

    - by JWendel
    Hi. I have a legacy db that I am mapping with NHibernate, and in several places a list of strings or domain objects is mapped as a delimited string in the database: 'string|string|string' in the value-type cases and 'domainID|domainID|domainID' in the reference-type cases. I know I can create a dummy property on the class and map to that field, but I would like to do it in a cleaner way, like when mapping enums to their string representation with the EnumStringType class. Is an IUserType the way to go here? Thanks in advance /Johan
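
    An IUserType is indeed NHibernate's hook for a two-way value conversion like this. For the general shape of such a converter, here is the analogous mechanism in Python's SQLAlchemy - a TypeDecorator doing the same split/join round trip:

        from sqlalchemy.types import String, TypeDecorator

        class DelimitedList(TypeDecorator):
            """Maps a Python list to an 'a|b|c' string column and back."""
            impl = String

            def process_bind_param(self, value, dialect):
                # list -> delimited string on the way into the db
                return None if value is None else "|".join(value)

            def process_result_value(self, value, dialect):
                # delimited string -> list on the way out
                return None if value is None else value.split("|")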

    Read the article

  • Pull datasets from DB and manipulate into separate arrays

    - by dresdin
    My DB has four fields: date:date, morning:integer, afternoon:integer, evening:integer. Google Charts needs the data split into one array per row, with a header array first (example below). What's the best way to do this?

        ['Day', 'Morning', 'Afternoon', 'Evening'],
        ['06/27/13', 1000, 400, 234],
        ['06/28/13', 1170, 460, 275],
        ['06/29/13', 660, 1120, 377],
        ['06/30/13', 1030, 540, 934]

    I've tried this without much luck:

        var data = google.visualization.arrayToDataTable([
          <% @todo.each do |t| %>
            ['Day', 'Morning', 'Afternoon', 'Evening'],
            [<%= t.date.strftime("%m/%d/%y") %>, <%= t.morning %>, <%= t.afternoon %>, <%= t.evening %>]
          <% end %>
        ]);
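
    The main shaping bug is that the header row is emitted inside the loop, so it repeats before every data row (the date also needs quoting as a string, and consecutive rows need separating commas). The target structure - one header array, then one array per record - sketched in Python with a hypothetical list of records:

        rows = [["Day", "Morning", "Afternoon", "Evening"]]  # header exactly once
        for t in todos:  # hypothetical records, e.g. dicts loaded from the db
            rows.append([t["date"], t["morning"], t["afternoon"], t["evening"]])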

    Read the article

  • Get a DB result with a value between two column values

    - by vitto
    Hi, I have a database situation where I'd like to get a user profile row by a user age range. This is my db:

        table_users
        username  age  email              url
        pippo     15   [email protected]    http://example.com
        pluto     33   [email protected]    http://example.com
        mikey     78   [email protected]    http://example.com

        table_profiles
        p_name  start_age_range  stop_age_range
        young   10               29
        adult   30               69
        old     70               inf

    I use MySQL and PHP, but I don't know if there is a specific technique for this, or indeed whether it's possible. So, something like:

        SELECT * FROM table_profiles AS profiles
        INNER JOIN table_users AS users
        # can I do something like this?
        ON users.age IS BETWEEN profiles.start_age_range AND profiles.stop_age_range
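
    BETWEEN is valid in a join predicate - just without the IS. A self-contained check of the range join, run here against SQLite from Python (999 stands in for the unbounded "inf" upper range):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE table_users (username TEXT, age INTEGER);
        CREATE TABLE table_profiles (p_name TEXT, start_age_range INTEGER, stop_age_range INTEGER);
        INSERT INTO table_users VALUES ('pippo', 15), ('pluto', 33), ('mikey', 78);
        INSERT INTO table_profiles VALUES ('young', 10, 29), ('adult', 30, 69), ('old', 70, 999);
        """)

        # The same ON clause works in MySQL: BETWEEN, not IS BETWEEN.
        rows = conn.execute("""
            SELECT users.username, users.age, profiles.p_name
            FROM table_users AS users
            INNER JOIN table_profiles AS profiles
                ON users.age BETWEEN profiles.start_age_range AND profiles.stop_age_range
        """).fetchall()
        print(rows)  # [('pippo', 15, 'young'), ('pluto', 33, 'adult'), ('mikey', 78, 'old')]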

    Read the article

  • Trigger to update data in another DB

    - by Permana
    I have the following schema:

        Database: test     Table: per_login_user   Fields: username (PK), password
        Database: wavinet  Table: login_user       Fields: username (PK), password

    What I want to do is create a trigger: whenever the password field in table per_login_user in database test gets updated, the same value is copied to the password field in table login_wavinet in database wavinet. I searched through Google and found this solution: http://forums.devshed.com/ms-sql-development-95/use-trigger-to-update-data-in-another-db-149985.html But when I run this query:

        CREATE TRIGGER trgPasswordUpdater
        ON dbo.per_login_user
        FOR UPDATE
        AS
        UPDATE wavinet.dbo.login_user
        SET password = I.password
        FROM inserted I
        INNER JOIN deleted D ON I.username = D.username
        WHERE wavinet.dbo.login_wavinet.password = D.password

    the query returns the error message:

        Msg 107, Level 16, State 3, Procedure trgPasswordUpdater, Line 4
        The column prefix 'wavinet.dbo.login_wavinet' does not match with a table name or alias name used in the query.
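
    Msg 107 is complaining that the WHERE clause names wavinet.dbo.login_wavinet, a table that appears nowhere in the statement's FROM list (the target table is login_user). A likely fix is to alias the cross-database target and join it to the trigger's inserted rows; a sketch, shown here applied from Python via pyodbc with a hypothetical connection string:

        import pyodbc

        # Hypothetical connection string; point it at the "test" database.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=test;Trusted_Connection=yes"
        )

        # Alias the target so every column prefix in the statement resolves
        # to a table or alias that actually appears in the FROM list.
        conn.execute("""
        CREATE TRIGGER trgPasswordUpdater ON dbo.per_login_user
        FOR UPDATE AS
        UPDATE u
        SET password = I.password
        FROM wavinet.dbo.login_user AS u
        INNER JOIN inserted AS I ON u.username = I.username
        """)
        conn.commit()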

    Read the article

  • Migrating from physical SQL (SQL 2000) to VMware machine (SQL 2008) - transferring a large DB

    - by alex
    We're in the middle of migrating from a physical Windows & SQL 2000 box to a virtualised Windows & SQL 2k8 box. The VMware box is on a different site, with better hardware, connectivity, etc. The old (current) physical machine is still in constant use. I've taken a backup of the DB on this machine, which is 21GB; transferring it to our virtual machine took around 7+ hours, which isn't ideal when we do the "actual" switchover. My question is: how should I handle the migration better? Could I set up our current machine to do log shipping to the VM to keep it up to date, then schedule downtime out of hours to do the switchover? Is there a better way?

    Read the article

  • Can't get values from Sqlite DB using query

    - by Sana Joseph
    I used SQLite to populate a DB with some tables in it. I made a function on another JavaScript page that queries the database and selects some values from a table. JavaScript:

        function GetSubjectsFromDB() {
            tx.executeSql('SELECT * FROM Subjects', [], queryNSuccess, errorCB);
        }

        function queryNSuccess(tx, results) {
            alert("Query Success");
            console.log("Returned rows = " + results.rows.length);
            if (!results.rowsAffected) {
                console.log('No rows affected!');
                return false;
            }
            console.log("Last inserted row ID = " + results.insertId);
        }

        function errorCB(err) {
            alert("Error processing SQL: " + err.code);
        }

    Is there some problem with this line?

        tx.executeSql('SELECT * FROM Subjects', [], queryNSuccess, errorCB);

    queryNSuccess isn't called, and neither is errorCB, so I don't know what's wrong.

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:

    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. So the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

    - Dropbox: has all except 1) access control, which is provided via Sharing folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user, and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except 1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except 1) transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer 1) transparent file system access.
    - Rackspace Cloud Drive: has all except 1) access control and 2) versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • How to open db.sqlite in Aloha Editor?

    - by Mariusz Poplawski
    I'm using Aloha Editor and I'm able to save/edit content without any problem. The editor saves content in an SQLite database (db.sqlite). I know where the file is, and I can see that it gets bigger as I add more text. But when I transfer the file to my local computer using FileZilla and open it in Notepad, I see only:

        ** This file contains an SQLite 2.1 database **

    I've tried a few programs, but they always say the database is unable to open. The programs I've tried: Sqliteman-1.2.2 and sqlitebrowser_200_b1_win.

    Read the article

  • Flexibility starting to develop applications and DB

    - by kristaps
    Hi! First of all, a note for the admins: this post may help many DBAs and developers, so I hope you don't close it - I'm writing about a widespread problem. I'm writing my master's thesis, and I would appreciate it if you could take a little time to answer the questions in my form. The questions are about db structures. Visit here: http://spreadsheets.google.com/embeddedform?formkey=dEhmOWFGb2twWVFreWJwWXp2U3c5a1E6MQ After I summarize the results, I will post them in a new post, so developers can choose which structure to use to meet the requirements when developing new applications. I think my survey will help a lot of people. Best regards, Kristaps

    Read the article

  • DB function failed with error number 1 in joomla admin panel

    - by sabuj
    When I access the Joomla article manager or module manager, I get the output below:

        500 - An error has occurred!
        DB function failed with error number 1
        Can't create/write to file '/tmp/#sql_57c0_0.MYD' (Errcode: 17)

        SQL=SELECT c.*, g.name AS groupname, cc.title AS name, u.name AS editor,
            f.content_id AS frontpage, s.title AS section_name, v.name AS author
        FROM jos_content AS c
        LEFT JOIN jos_categories AS cc ON cc.id = c.catid
        LEFT JOIN jos_sections AS s ON s.id = c.sectionid
        LEFT JOIN jos_groups AS g ON g.id = c.access
        LEFT JOIN jos_users AS u ON u.id = c.checked_out
        LEFT JOIN jos_users AS v ON v.id = c.created_by
        LEFT JOIN jos_content_frontpage AS f ON f.content_id = c.id
        WHERE c.state != -2
        ORDER BY section_name, section_name, cc.title, c.ordering
        LIMIT 0, 20

    Read the article

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64-encoded audio and image data. This data will be served via HTTP in the form of XML, with the Base64 data inline. These files will most likely reach 20MB and higher. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up, but it is largely unnecessary because this data will likely be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data, I want to see what the consensus is. As of now, I am leaning toward storing them in the filesystem for efficiency reasons, but if it is feasible to store them in a database, it would be much easier to manage the data.
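
    One concrete input to the decision: Base64 inflates the payload by a third (one extra byte for every three), so a 20MB file becomes roughly 27MB of character data in either store. Easy to verify:

        import base64

        raw = b"\x00" * (20 * 1024 * 1024)  # a 20MB binary payload
        encoded = base64.b64encode(raw)
        print(len(encoded) / len(raw))      # ~1.33, plus a little padding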

    Read the article
