Search Results

Search found 10329 results on 414 pages for 'berkeley db je'.

  • How do I start correctly building database classes in C#?

    - by e4rthdog
    I am new to C# programming and to OOP. I need to dive into web applications for my company, and I need to do it fast and correctly. So even though I know ASP.NET MVC is the way to go, I want to start with some simple applications in ASP.NET WebForms and then advance to the MVC way of doing things. Also, regarding my DB classes: I plan to create common database classes so that I can use them from either WinForms or ASP.NET applications. I also know that the way to go is to learn about ORMs and EF, BUT I also want to start from where I feel comfortable, and that is the traditional ADO.NET way. So, about my data access layer classes: Should I return my results as DataSets or as ArrayLists/Lists? Should my methods do their own connect/disconnect from the DB, or should I have separate methods and let the application manage the connection? (see the sketch below this item)

    Read the article
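
    A common answer to the question above is to keep each data-access method self-contained: open a connection, run the query, close the connection, and return a plain List<T> rather than a DataSet, so the same class works from both WinForms and ASP.NET. A minimal ADO.NET sketch; the Customer type, table, and connection string are made up for illustration and are not from the question:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerData
        {
            // Hypothetical connection string; in practice read it from configuration.
            private const string ConnectionString =
                "Server=.;Database=MyDb;Integrated Security=true";

            // Each call opens and closes its own connection; the using blocks
            // release the connection even if the query throws.
            public static List<Customer> GetCustomers()
            {
                var result = new List<Customer>();
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            result.Add(new Customer
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.GetString(1)
                            });
                        }
                    }
                }
                return result;
            }
        }

    Returning a List<Customer> keeps callers independent of ADO.NET types, which also makes a later move to an ORM less painful than exposing DataSets would.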

  • The performance implications of IEnumerable vs. IQueryable

    It all started innocently enough. I was implementing an "Older Posts/Newer Posts" feature for my new web site and was writing code like this:

        IEnumerable<Post> FilterByCategory(IEnumerable<Post> posts, string category)
        {
            if (!string.IsNullOrEmpty(category))
            {
                return posts.Where(p => p.Category.Contains(category));
            }
        }
        ...
        var posts = FilterByCategory(db.Posts, category);
        int count = posts.Count();
        ...

    The "db" was an EF object context object, but it could just as...

    Read the article
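
    In general, if the parameter is IEnumerable<Post>, the Where and the later Count() run in memory, so EF has to pull every post from the database before filtering; if it is IQueryable<Post>, the same lambda is composed into the SQL query and only the filtered rows (or just the count) come back. A minimal sketch of the IQueryable version, with a stripped-down Post type standing in for the real entity:

        using System.Linq;

        public class Post
        {
            // Minimal shape for illustration; the real entity has more properties.
            public string Category { get; set; }
        }

        public static class PostFilters
        {
            // Keeping the parameter as IQueryable<Post> lets the filter stay part
            // of the query EF sends to the database.
            public static IQueryable<Post> FilterByCategory(IQueryable<Post> posts, string category)
            {
                if (!string.IsNullOrEmpty(category))
                {
                    return posts.Where(p => p.Category.Contains(category));
                }
                return posts;
            }
        }

        // Usage, with db.Posts being the EF set from the excerpt:
        //   var posts = FilterByCategory(db.Posts, category);
        //   int count = posts.Count();  // executes as SELECT COUNT(*) ... WHERE ... on the server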

  • Enabling EUS support in OUD 11gR2 using command line interface

    - by Sylvain Duloutre
    Enterprise User Security (EUS) allows Oracle Database to use users and roles stored in LDAP for authentication and authorization. Since the 11gR2 release, OUD natively supports EUS. EUS can be easily configured during OUD setup, and ODSM (the graphical admin console) can also be used to enable EUS for a new suffix. However, enabling EUS for a new suffix using the command line interface is currently not documented, so here is the procedure. Let's assume that EUS support was enabled during initial setup, and let o=example be the new suffix I want to use to store Enterprise users. The following sequence of commands must be applied for each new suffix:

        // Create a local database holding EUS context info
        dsconfig create-workflow-element --set base-dn:cn=OracleContext,o=example --set enabled:true --type db-local-backend --element-name exampleContext -n

        // Add a workflow element in the call path to generate on the fly attributes required by EUS
        dsconfig create-workflow-element --set enabled:true --type eus-context --element-name eusContext --set next-workflow-element:exampleContext -n

        // Add the context to a workflow for routing
        dsconfig create-workflow --set base-dn:cn=OracleContext,o=example --set enabled:true --set workflow-element:eusContext --workflow-name exampleContext_workflow -n

        // Add the new workflow to the appropriate network group
        dsconfig set-network-group-prop --group-name network-group --add workflow:exampleContext_workflow -n

        // Create the local database for o=example
        dsconfig create-workflow-element --set base-dn:o=example --set enabled:true --type db-local-backend --element-name example -n

        // Create a workflow element in the call path to the user data to generate on the fly attributes expected by EUS
        dsconfig create-workflow-element --set enabled:true --set eus-realm:o=example --set next-workflow-element:example --type eus --element-name eusWfe

        // Add the db to a workflow for routing
        dsconfig create-workflow --set base-dn:o=example --set enabled:true --set workflow-element:eusWfe --workflow-name example_workflow -n

        // Add the new workflow to the appropriate network group
        dsconfig set-network-group-prop --group-name network-group --add workflow:example_workflow -n

        // Add the appropriate ACIs for EUS
        dsconfig set-access-control-handler-prop \
            --add global-aci:'(target="ldap:///o=example")(targetattr="authpassword")(version 3.0; acl "EUS reads authpassword"; allow (read,search,compare) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

        dsconfig set-access-control-handler-prop \
            --add global-aci:'(target="ldap:///o=example")(targetattr="orclaccountstatusevent")(version 3.0; acl "EUS writes orclaccountstatusenabled"; allow (write) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

    Last but not least, you must adapt the content of the ${OUD}/config/EUS/eusData.ldif file with your suffix value and then import it into OUD.

    Read the article

  • What is the best way to wire up an Entity Framework database context (model) to a ViewModel in MVVM WPF?

    - by hal9k2
    As in the title: what is the best way to wire up an Entity Framework database model (context) to a ViewModel in MVVM (WPF)? I am learning the MVVM pattern in WPF; a lot of examples show how to connect a model to a ViewModel, but the models in those examples are just simple classes. I want to use MVVM together with an Entity Framework model (database-first approach). What is the best way to wire the model to the ViewModel? Thanks for any answers.

        // ctor of ViewModel
        public ViewModel()
        {
            db = new PackageShipmentDBEntities(); // Entity Framework generated class
            ListaZBazy = new ObservableCollection<Pack>(db.Packs.Where(w => w.IsSent == false));
        }

    This is my usual ViewModel constructor. I think there is a better way; I was reading about the repository pattern, but I am not sure whether I can adapt it to WPF MVVM. (see the sketch below this item)

    Read the article
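
    One common way to keep such a ViewModel testable is to hide the EF context behind a small repository interface and pass it in, so the ViewModel never constructs PackageShipmentDBEntities itself. A minimal sketch reusing the Pack and PackageShipmentDBEntities names from the question; the IPackRepository interface and class names are made up for illustration:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;

        // Hypothetical abstraction over the EF context; a fake can implement it in unit tests.
        public interface IPackRepository
        {
            IEnumerable<Pack> GetUnsentPacks();
        }

        public class EfPackRepository : IPackRepository
        {
            public IEnumerable<Pack> GetUnsentPacks()
            {
                using (var db = new PackageShipmentDBEntities())
                {
                    return db.Packs.Where(p => p.IsSent == false).ToList();
                }
            }
        }

        public class ViewModel
        {
            public ObservableCollection<Pack> ListaZBazy { get; private set; }

            // The repository is injected, so the ViewModel can be tested
            // against a fake IPackRepository instead of a real database.
            public ViewModel(IPackRepository repository)
            {
                ListaZBazy = new ObservableCollection<Pack>(repository.GetUnsentPacks());
            }
        }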

  • Why is it taking so long to open the Ubuntu Help Center?

    - by Agmenor
    When I click on the Help Center icon in the 'System' menu, it takes more than a minute to launch the program. More than a minute, for a text-only program that is essentially a website! All my other programs work fine, and I have seen this problem on other computers as well. Is there a reason for this? Will it be fixed? I think it is an important issue for beginners. As a response to Scaine, the result of the command software-center is the following:

        Traceback (most recent call last):
          File "/usr/share/software-center/update-software-center-agent", line 72, in <module>
            db = xapian.WritableDatabase(pathname, xapian.DB_CREATE_OR_OVERWRITE)
          File "/usr/lib/python2.6/dist-packages/xapian.py", line 3195, in __init__
            _xapian.WritableDatabase_swiginit(self,_xapian.new_WritableDatabase(*args))
        xapian.DatabaseLockError: Unable to acquire database write lock on /home/agmenor/.cache/software-center/software-center-agent.db.tmp: already locked
        2011-01-11 19:57:24,495 - softwarecenter.app - INFO - software-center-agent finished with status 1

    Read the article

  • Dynamic Bind9 + DHCP

    - by AcidRod75
    I have been working on setting up a server for my internal network. So far I have a working isc-dhcp-server that can update a chrooted BIND9 (on the same machine). I need to add some static entries to the DNS so users can resolve the websites that reside in our DMZ. What I have tried already is to modify /etc/bind/named.conf.local with this info:

        //
        // Do any local configuration here
        //
        // Consider adding the 1918 zones here, if they are not used in your
        // organization
        //include "/etc/bind/zones.rfc1918";

        key DHCP_UPDATER {
            algorithm HMAC-MD5.SIG-ALG.REG.INT;
            secret "MySuperSecretHash"; (this is not the real value BTW)
        };

        zone "quality.internal" IN {
            type master;
            file "/var/lib/bind/quality.internal.db";
            allow-update { key DHCP_UPDATER; };
        };

        zone "0.10.10.in-addr.arpa" {
            type master;
            file "/var/lib/bind/rev.10.10.0.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
        };

        logging {
            channel query.log {
                file "/var/log/named/query.log";
                severity debug 3;
            };
            category queries { query.log; };
        };
        --- EOF ----

    Then I added these 2 entries:

        zone "ourserver.internal" IN {
            type master;
            file "/var/lib/bind/ourserver.internal.db";
        };

        zone "0.16.172.in-addr.arpa" {
            type master;
            file "/var/lib/bind/rev.172.16.0.in-addr.arpa";
        };
        ---- EOF ----

    So I created the files ourserver.internal.db and rev.172.16.0.in-addr.arpa, placed them BOTH in /var/lib/bind/, changed the permissions so the bind user can access them, and restarted the service. When I do a NSLOOKUP www.ourserver.internal I get:

        Server:  127.0.0.1
        Address: 127.0.0.1#53
        ** server can't find www.ourserver.internal: NXDOMAIN

    BUT when I do a reverse lookup...

        Server:  127.0.0.1
        Address: 127.0.0.1#53
        5.0.16.172.in-addr.arpa  name = www.ourserver.internal

    I do not understand what's wrong. Some help with this will save me from installing a new DNS server in the DMZ JUST to host internal site names. TY in advance. BTW: the server I'm using has Ubuntu Server 11.10 fully patched.

    Read the article

  • Does my approach for building a real time monitoring system make sense? [closed]

    - by sameer
    I am developing an application that will display a dashboard showing data from different SQL databases. This needs to happen in almost real time; our refresh interval is about 5 minutes. My approach so far is:
    1. Develop a Windows service to accumulate the data from the various SQL Server instances.
    2. Persist those details into a SQL DB, from which the dashboard will display them on the web page.
    3. Have the Windows service fetch the data every x minutes.
    The details of the SQL Server instances will be stored in the SQL DB that the Windows service refers to. Does my approach make sense? (see the sketch below this item)

    Read the article
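
    A minimal sketch of the polling loop such a Windows service could run. The connection strings, the Snapshots table, and the sample metric query are all placeholders; in the approach described above they would come from the configuration stored in the monitoring DB:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Threading;

        public class DashboardCollector
        {
            // Placeholder source instances; the question keeps these in a config table.
            private readonly List<string> _sources = new List<string>
            {
                "Server=srv1;Database=AppDb1;Integrated Security=true",
                "Server=srv2;Database=AppDb2;Integrated Security=true"
            };

            private const string MonitoringDb =
                "Server=monitor;Database=Dashboard;Integrated Security=true";

            private readonly Timer _timer;

            public DashboardCollector()
            {
                // Fire every 5 minutes, matching the refresh interval above.
                _timer = new Timer(_ => CollectOnce(), null,
                                   TimeSpan.Zero, TimeSpan.FromMinutes(5));
            }

            private void CollectOnce()
            {
                foreach (var source in _sources)
                {
                    using (var conn = new SqlConnection(source))
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn)) // placeholder metric
                    {
                        conn.Open();
                        int value = (int)cmd.ExecuteScalar();
                        SaveSnapshot(source, value);
                    }
                }
            }

            private void SaveSnapshot(string source, int value)
            {
                using (var conn = new SqlConnection(MonitoringDb))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Snapshots (Source, Value, CollectedAt) VALUES (@s, @v, GETUTCDATE())", conn))
                {
                    cmd.Parameters.AddWithValue("@s", source);
                    cmd.Parameters.AddWithValue("@v", value);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    The dashboard page then only ever queries the Snapshots table, so slow or unreachable source instances never block a web request.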

  • Map, Set use cases in a general web app

    - by user2541902
    I am currently working on my own Java web app (to be shown in interviews to get a Java job), so I have not worked on Java in a professional environment and have had no guidance. I have a database, entity classes, and JPA relationships. The use cases are things like: a user has albums, an album has pics, a user has locations, a location has coordinates, etc. I used List (ArrayList) everywhere. I can do anything with a List and the DB: get some entry, find things, etc. For example, I keep the list of users in a List and then use queries to get a particular entry (why would I keep them in a Map with id/email as key?). I know the workings, features, and implementing classes of Map and Set very well, and I can use them for solving some algorithm or processing some data. In interviews, I get asked whether I have worked with these, where I have used them, etc. So please tell me about cases where they should be used (with a DB, or any popular real use case). (see the sketch below this item)

    Read the article
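
    The question is about Java's Map and Set, but the trade-off is the same in any language: a Map keyed by id or email gives constant-time lookups and enforces unique keys, while a List has to be scanned. A small sketch of the idea in C# (Dictionary standing in for Java's HashMap, HashSet for Java's HashSet); the User type and the data are made up for illustration:

        using System;
        using System.Collections.Generic;

        public class User
        {
            public int Id { get; set; }
            public string Email { get; set; }
        }

        public static class MapSetDemo
        {
            public static void Main()
            {
                var users = new List<User>
                {
                    new User { Id = 1, Email = "a@example.com" },
                    new User { Id = 2, Email = "b@example.com" }
                };

                // List: finding a user by email means scanning every element (O(n)).
                var byScan = users.Find(u => u.Email == "b@example.com");

                // Map/Dictionary: index the users once, then look them up by key in O(1).
                // Typical use: an in-memory cache of DB rows, e.g. email -> user.
                var byEmail = new Dictionary<string, User>();
                foreach (var u in users)
                {
                    byEmail[u.Email] = u;
                }
                var byLookup = byEmail["b@example.com"];

                // Set: membership and uniqueness without caring about order,
                // e.g. the ids of albums the current user has already viewed.
                var viewedAlbumIds = new HashSet<int> { 10, 11 };
                bool alreadyViewed = viewedAlbumIds.Contains(10); // true
                viewedAlbumIds.Add(10);                           // duplicate is ignored

                Console.WriteLine($"{byScan.Id} {byLookup.Id} {alreadyViewed}");
            }
        }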

  • SS7 (M3UA, SCCP, TCAP, MAP) Stack

    - by Ammar Hameed
    I'm building an open source SMSC from scratch; it's almost finished. The SRI and forwardSM operations are working, but I still have a few things to do for the receiving part. I've built the SS7 stack already, but I'm using the DB for saving the TCAP transaction IDs so they can be updated later to get/generate responses. My approach is this: I created a memory table (heap table), saved the TCAP TID in the database, then compared the received TCAP TID with the TIDs saved in the database and decided whether to end the TCAP session or continue. What is the best way to implement this? I'm thinking of a doubly linked list that holds the TCAP TIDs. Am I going in the right direction, or should I use a technique other than a database or a D-linked list? Should I leave it as it is and let the database do the job of saving the TIDs? Please note that I'm using the SCTP implementation available on Linux (lsctp) as the transport protocol, the language I'm using is C, and the DB is MySQL.

    Read the article

  • What does private cloud DaaS or DBaaS really mean?

    - by llaszews
    Just had a meeting with a Fortune 1000 company regarding their private DBaaS or DaaS offering. It was interesting to see what DBaaS really means to them:
    1. Automated database provisioning - being able to 'one button' provision databases and database objects, including creating the database instance, creating database objects, network configuration, and security provisioning. It is estimated that just being able to provision a new DB table in an automated fashion will reduce the time required from 60 hours down to 8 hours.
    2. Virtualization and blades - the DBaaS infrastructure is all based upon VMs and blades.
    3. Consolidation of database vendors - moving from over ten database vendors down to three.

    Read the article

  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: There are two (so far) types of entities that can participate in combat. Let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once. I'm trying to write the logic engine for combat by having combatants injected into it. I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database, and the other for my mock objects. I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue. A combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat, and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface like IEntityProvider that I inject into the data classes? Also with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit that completely so I don't need to check for it? If someone could help me just layout a class diagram or something for this it would help me a lot. Thanks. edit Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.

    Read the article

  • Saving all hits to a web app

    - by bevanb
    Are there standard approaches to persisting data for every hit that a web app receives? This would be for analytics purposes (as a better alternative to log mining down the road). It seems like Redis would be a must. Is it advisable to also use a different DB server for that table, or would Redis be enough to mitigate the impact on the main DB? Also, how common is this practice? It seems like a no-brainer for businesses that want to better understand their users, but I haven't read much about it. (see the sketch below this item)

    Read the article
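
    A minimal sketch of one way to do this from C# with the StackExchange.Redis client: each hit is appended to a Redis list and a per-path counter is incremented, so the main relational DB is never touched on the hot path; a background job can later drain the list into an analytics store. The key names and the RecordHit signature are made up for illustration:

        using System;
        using StackExchange.Redis;

        public static class HitLogger
        {
            // One multiplexer for the whole app; it is thread-safe and pools connections.
            private static readonly ConnectionMultiplexer Redis =
                ConnectionMultiplexer.Connect("localhost:6379");

            public static void RecordHit(string path, string userId)
            {
                IDatabase db = Redis.GetDatabase();

                // Append the raw hit to a list; a scheduled job can move these
                // into the analytics database in batches.
                string entry = $"{DateTime.UtcNow:o}|{userId}|{path}";
                db.ListRightPush("hits:raw", entry, flags: CommandFlags.FireAndForget);

                // Keep a cheap aggregate counter per path as well.
                db.StringIncrement($"hits:count:{path}", flags: CommandFlags.FireAndForget);
            }
        }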

  • Using static in PHP

    - by nischayn22
    I have a few functions in PHP that read data via functions in a class:

        function readUsername($userId) {
            $reader = getReader();
            return $reader->getname($userId);
        }

        function readUserAddress($userId) {
            $reader = getReader();
            return $reader->getaddress($userId);
        }

    All of these call getReader():

        function getReader() {
            require_once("Reader.php");
            static $reader = null;
            if ($reader === null) {
                $reader = new Reader();
            }
            return $reader;
        }

    An overview of the Reader class:

        class Reader {
            function getname($id) {
                // if an in-memory cache exists for this id, return that
                // else get it from the db and cache it
            }
            function getaddress($id) {
                $this->getname($id);
                // get the address from the name here
            }
            /* Other stuff */
        }

    Why is the Reader class needed? The Reader class does some in-memory caching of user details, so I need only one object of class Reader; it will cache the user details instead of making multiple db calls. I am using static so that the object gets created only once. Is this the right approach, or should I do something else?

    Read the article

  • Are there other ways to do insert/update/delete on a remote Oracle database?

    - by gunbuster363
    I asked a question recently concerning the speed of executing insert/update/delete statements through the JDBC driver against a remote machine, but the problem cannot be solved easily. I would like to ask: is there any other way to execute the inserts/updates/deletes against Oracle? The current situation is this: the DB is on a separate machine from the Java program used to update the DB. I looked around the internet and found people suggesting using pure SQL or PL/SQL to do the updates. Is that possible? And would we need to run the SQL or PL/SQL on the local machine? I have no knowledge of PL/SQL, so I am not sure whether we can create some kind of script and call it on a remote machine. Let's say the situation is like this: the input data is on machine A, and the original Java program is also on machine A, but Oracle is on machine B. Is there any approach other than JDBC?

    Read the article

  • MySQL with multiple threads and processes

    - by Abhan
    I'm developing a telecom messaging platform in C, and I'm going to need multiple processes working with a MySQL DB. How can I make two processes read/write to/from a MySQL DB and, if/when one of them goes down, have the other seamlessly take over the work until the dead process comes back? I have been thinking about and googling some options and am stuck on which one to choose. What I think so far is that a table lock is not the best option to go for, as it will stall the other process until the table is unlocked. The other option is to use row-level locks or manual locks, but I can't find the best way to do it.

    Read the article

  • Is it possible to get a programming job in the Bay Area?

    - by user475119
    Is it my perception or is it reality that it is harder to get software development work in Silicon Valley than in other population centers? My theory is that some of the best and brightest (and youngest) are all here, all working for the top tech companies. And little ol' me, over 40, with a math degree (not from Stanford or Berkeley), OK at algorithms, and with many skills across many programming tools and concepts, but not published and not top of my class, has a bit of a tough time competing. (I'm a Python programmer, by the way.) Thanks

    Read the article

  • Who created that user?

    - by AaronBertrand
    Twitter has provided some great fodder for blog content lately. And twice this week, I've found an excuse to take advantage of the default trace. Tonight @meltondba asked: I'm trying to find who created a user act in a DB It is true, SQL Server doesn't really keep track of who created objects, such as user accounts in a database. You can get some of this information from the default trace, though, since it tracks EventClass 109 (Audit Add DB User). If you run this code: USE [master] ; GO CREATE LOGIN...(read more)

    Read the article

  • Hybrid Columnar Compression

    - by user12620172
    You have heard me talk about the HCC feature for Oracle databases in the past. Hybrid Columnar Compression is a fantastic, built-in, free feature of Oracle 11gR2. One used to need an Exadata to make use of it. However, last October, Oracle opened it up and now allows it to work on ANY Oracle DB server running 11gR2, as long as the storage behind it is a ZFSSA for DNFS, or an Axiom for FC. If you're not sure why this is so cool or what HCC can do for your Oracle database, please check out this presentation. In it, Art will explain HCC, show you what it does, and give you a great idea why it's such a game-changer for those holding lots of historical DB data. Did I mention it's free? Click here: http://hcc.zanghosting.com/hcc-demo-swf.html

    Read the article

  • How to serialize a function depending on which object instance calls it (serialize calls from the same instance, but not across instances)

    - by LondonDreams
    I have a function which fetches and updates a record from the DB, and I am trying to make sure that if the function is called on the same object instance (from the same or a different thread) it behaves as synchronized, while calls on different object instances need not be synchronized against each other. I have tried using a lock per client, that is, instead of synchronizing the method directly, using explicit locking through lock objects kept in a Map. The function is like:

        getAndUpdateMyHitCount(myObjId) {
            // go to the db and get the unique record for myObjId
            // fetch the value, increment it, save the update
        }

    This function may get called in the same thread by different or the same object instances. But since fetching and matching from the Map is slow, is there another, more optimized way to do this? I found something similar in this Question but don't feel that it is optimized. (see the sketch below this item)

    Read the article
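
    A standard pattern for this is a lock object per key (here per myObjId) kept in a concurrent map, so calls that share a key serialize while everything else runs in parallel, and the map lookup itself is just a cheap hash lookup. The question is about Java (ConcurrentHashMap plus synchronized would play the same roles); here is a sketch of the idea in C#, with the DB work left as a placeholder:

        using System.Collections.Concurrent;

        public class HitCounter
        {
            // One lock object per key; GetOrAdd guarantees that all callers
            // passing the same key end up with the same lock instance.
            private static readonly ConcurrentDictionary<string, object> Locks =
                new ConcurrentDictionary<string, object>();

            public void GetAndUpdateMyHitCount(string myObjId)
            {
                object gate = Locks.GetOrAdd(myObjId, _ => new object());

                // Calls sharing myObjId serialize here; calls with other ids do not block.
                lock (gate)
                {
                    // Placeholder for the real work:
                    // fetch the record for myObjId, increment the value, save the update.
                }
            }
        }

    The per-call overhead is a single dictionary lookup, which is small next to the DB round trip done inside the lock.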

  • PHP - Data Access Layer

    - by scarpacci
    I am currently reviewing a code base and noticed that a majority of the calls (along with DB connections) are just buried inside the PHP scripts. I would have assumed that like other languages they would have developed some sort of data access layer (Like I would do in .Net or Java) for all of the communication to the DB (or implemented MVC, etc). Is this still a common pattern in PHP or is there alternative methodologies/patterns for this technology? I am just trying to understand why the subs would have developed it this way. Any insight/info on how experienced developers design an approach data access in PHP would be very much appreciated.

    Read the article

  • Display a large amount of data to the client through pagination

    - by ebram tharwat
    I have a web application in which I need to show a large number of records to clients. I'll use pagination, but I was wondering: should I (a) load all the data once, so that pagination, sorting, and searching will be easy, even though it takes a long time (with a local DB it takes up to 9 seconds), or (b) make a new request to the server, and then a new request to the DB, each time I show a new page? But then, what if the client clicks the Prev button? I'll be making a new request for data that I had already loaded. Should I cache data that has already been loaded, and if so, how, and is that a good technique? So: load all the data once, or make a new request every time I need data that may have been loaded before? I'm using an ASP.NET MVC SPA with durandaljs and knockoutjs. (see the sketch below this item)

    Read the article
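
    The usual answer is to let the database do the paging: each page request sends the page number and size, and the query fetches only that slice plus a total count for the pager, so neither the server nor the client ever holds the full result set. A minimal sketch of such an MVC action; AppDbContext, Records, and the Id ordering are placeholders, and caching already-visited pages on the client is a separate, optional optimization:

        using System.Linq;
        using System.Web.Mvc;

        public class RecordsController : Controller
        {
            // GET /Records/Page?page=3&pageSize=50
            public ActionResult Page(int page = 1, int pageSize = 50)
            {
                using (var db = new AppDbContext()) // placeholder EF context
                {
                    var query = db.Records.OrderBy(r => r.Id); // a stable order is required for paging

                    var items = query
                        .Skip((page - 1) * pageSize)  // translated to SQL, so only one page is fetched
                        .Take(pageSize)
                        .ToList();

                    int total = query.Count();

                    return Json(new { items, total, page, pageSize },
                                JsonRequestBehavior.AllowGet);
                }
            }
        }

    On the Durandal/Knockout side the pager just requests the page it needs and binds the returned items; previously visited pages can be kept in a small client-side cache if refetching them feels too slow.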
