Search Results

Search found 57907 results on 2317 pages for 'database access'.


  • Creating a layer of abstraction over the ORM layer

    - by Daok
    I believe that if your repositories use an ORM, they are already sufficiently abstracted from the database. However, where I am working now, someone believes we should have a layer that abstracts the ORM in case we want to change ORMs later. Is that really necessary, or is it simply a lot of overhead to create a layer that will work with many ORMs? Edit: to give more detail, we have POCO classes and entity classes that are mapped with AutoMapper. The entity classes are used by the repository layer, which then uses the additional layer of abstraction to communicate with Entity Framework. The business layer has no direct access to Entity Framework; even without the additional abstraction layer over the ORM, it has to go through the service layer, which uses the repository layer. In both cases the business layer is totally separated from the ORM. The main argument is being able to change ORMs in the future. Since the ORM is already localized inside the repository layer, to me it is already well separated, and I do not see why an additional layer of abstraction is required for "quality" code.
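
    A minimal sketch of the two shapes under discussion, in Python with hypothetical names since the actual stack is C#/Entity Framework: the repository can hold the ORM session directly, or hold a thin gateway interface that becomes the only code to rewrite when switching ORMs.

        from typing import Any, Protocol

        class OrmGateway(Protocol):
            """The disputed extra layer: the only ORM surface the repository sees."""
            def add(self, entity: Any) -> None: ...
            def all(self, entity_type: type) -> list: ...

        class SqlAlchemyGateway:
            """One concrete gateway; switching ORMs means rewriting only this class."""
            def __init__(self, session):
                self.session = session
            def add(self, entity):
                self.session.add(entity)
            def all(self, entity_type):
                return self.session.query(entity_type).all()

        class UserEntity:
            pass  # stands in for a mapped entity class

        class UserRepository:
            # With the extra layer: depend on OrmGateway.
            # Without it: take the ORM session here directly -- the ORM is still
            # confined to the repository layer either way, which is the point in dispute.
            def __init__(self, gateway: OrmGateway):
                self.gateway = gateway
            def all_users(self):
                return self.gateway.all(UserEntity)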

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here except for context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree, and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project has_many Nodes; has_many GlobalConditions
        Node has_one Parent; has_many Nodes; has_many WeightingFactors through NodeFactors; has_many Tags through NodeTags
        GlobalCondition has_many Nodes (referenced by id, rather than replicating the tree)
        WeightingFactor has_many Nodes through NodeFactors
        Tag has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later; but given the ubiquity of complex trees in day-to-day programming life, I wouldn't be surprised if there were better design patterns or standard approaches to this type of situation that I am not well versed in.
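
    A minimal sketch of the "pull everything for the project, link later" strategy described above, in Python with a hypothetical nodes table for illustration: one query fetches every node, then parent/child pointers are wired up in memory, so the database does one pass instead of one query per branch.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE nodes (id INTEGER PRIMARY KEY, project_id INT, parent_id INT, name TEXT);
        INSERT INTO nodes VALUES (1, 7, NULL, 'root'), (2, 7, 1, 'left'), (3, 7, 1, 'right');
        """)

        def load_project_tree(conn, project_id):
            """One query for the whole project, then O(n) linking in memory."""
            rows = conn.execute(
                "SELECT id, parent_id, name FROM nodes WHERE project_id = ?",
                (project_id,)).fetchall()
            nodes = {r[0]: {"id": r[0], "parent_id": r[1], "name": r[2], "children": []}
                     for r in rows}
            roots = []
            for node in nodes.values():
                parent = nodes.get(node["parent_id"])
                if parent is None:
                    roots.append(node)              # top of the tree
                else:
                    parent["children"].append(node)
            return roots

        print(load_project_tree(conn, 7))   # one round-trip instead of one per branch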

    Read the article

  • Compute the AES-encryption key given the plaintext and its ciphertext?

    - by Null Pointers etc.
    I'm tasked with creating database tables in Oracle which contain encrypted strings (i.e., the columns are RAW). The strings are encrypted by the application (using AES with a 128-bit key) and stored in Oracle, then later retrieved from Oracle and decrypted (i.e., Oracle itself never sees the unencrypted strings). I've come across one column that will always be one of two strings. I'm worried that someone will notice the repetition and, knowing what the two values must be, use them to figure out the AES key. For example, suppose someone sees that the column is always one of two ciphertexts:

        Ciphertext #1: BF,4F,8B,FE, 60,D8,33,56, 1B,F2,35,72, 49,20,DE,C6
        Ciphertext #2: BC,E8,54,BD, F4,B3,36,3B, DD,70,76,45, 29,28,50,07

    and knows the corresponding plaintexts:

        Plaintext #1 ("Detroit"): 44,00,65,00, 74,00,72,00, 6F,00,69,00, 74,00,00,00
        Plaintext #2 ("Chicago"): 43,00,68,00, 69,00,63,00, 61,00,67,00, 6F,00,00,00

    Can he deduce that the encryption key is "Buffalo"?

        42,00,75,00, 66,00,66,00, 61,00,6C,00, 6F,00,00,00

    I'm thinking there should be only one 128-bit key that maps Plaintext #1 to Ciphertext #1. Does this mean I should go to a 192-bit or 256-bit key instead, or find some other solution? (As an aside, here are two other ciphertexts for the same plaintexts but with a different key.)

        Ciphertext #1 A ("Detroit"): E4,28,29,E3, 6E,C2,64,FA, A1,F4,F4,96, FC,18,4A,C5
        Ciphertext #2 A ("Chicago"): EA,87,30,F0, AC,44,5D,ED, FD,EB,A8,79, 83,59,53,B7
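
    For what it's worth, a known plaintext/ciphertext pair does not let an attacker invert AES to recover the key; the practical danger in this example is that the key itself is a dictionary word, which makes offline guessing cheap regardless of key length. A sketch of that attack, assuming the pycryptodome package and the question's UTF-16LE, single-block (ECB-style) encoding:

        from Crypto.Cipher import AES  # pycryptodome; an assumption about the stack

        def word_key(word: str) -> bytes:
            # Same encoding the question uses: UTF-16LE, null-padded to 16 bytes.
            return word.encode("utf-16-le").ljust(16, b"\x00")

        plaintext  = word_key("Detroit")   # one 16-byte block, as in the question
        secret     = word_key("Buffalo")   # low-entropy key -- the real weakness
        ciphertext = AES.new(secret, AES.MODE_ECB).encrypt(plaintext)

        # The known pair does NOT let an attacker invert AES, but it does let
        # him test guesses offline. A word list makes that cheap:
        for guess in ["Chicago", "Detroit", "Albany", "Buffalo"]:
            if AES.new(word_key(guess), AES.MODE_ECB).encrypt(plaintext) == ciphertext:
                print("key recovered:", guess)   # prints: key recovered: Buffalo

    A randomly generated key closes that hole, and a mode with a random IV (e.g. CBC or GCM) additionally stops the column from revealing which of the two values each row holds.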

    Read the article

  • Engineered Systems: Oracle kills three birds with one stone

    - by A&C Redaktion
    The news from Oracle's partner business is making headlines in ChannelPartner magazine. On the new focus on Engineered Systems and the SMB appliances, the article says Oracle can "kill three birds with one stone": first, former Sun hardware resellers get an easier entry into the software business; second, the appliances open up new opportunities in the mid-market; and third, the strategy reinforces the two-tier channel model. Silvia Kaske, Senior Director Channel Sales & Alliances at Oracle Deutschland, comments: "We are strengthening the channel worldwide because the SMB business is picking up." Besides this thoroughly positive assessment of the channel strategy, the article offers a clear overview of what Engineered Systems actually are. It also presents and discusses the use cases (big data, mobile computing, cloud, etc.) and Oracle's offerings in this area. The highlight, unsurprisingly, is the Oracle Database Appliance. As the portfolio grows, so naturally does the number of specializations. That makes sense to Silvia Kaske: "End customers expect specialists, not generalists. Only with a clear focus will a partner be successful." The full ChannelPartner article is available under the title "Oracle lockt Channel mit SMB-Appliances".

    Read the article

  • Real-time stock market application

    - by Sam
    I'm an amateur programmer. I'd like to develop a software application (like TradeStation) to analyse real-time market data. Please tell me whether the following approach is correct, i.e. the procedures, knowledge or software needed:

    Use a DB to store the real-time feed from the data provider: what would be the right DB to use? I know it should be a time-series one. Can I use SQL, MySQL, or others? What kind of database can receive a real-time data feed? Do I need to configure the DB to do this?

    If the real-time data arrives in ASCII form, how can it be converted into something the DB and my application can read? Do I have to write code, or just use some add-ins? What kind of add-ins are needed?

    How should I code the program to retrieve the changing data from the DB, so that the analysis software's screen data can also change asynchronously (like the RTD in Excel)?

    Which aspects of programming do I need to learn to develop the above? Are there web resources/books I can refer to for more information?
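
    A minimal sketch of the ingest side, in Python with SQLite purely for illustration (any RDBMS can store a tick stream; dedicated time-series databases just do it more efficiently), assuming a hypothetical comma-separated ASCII feed format: parse each line and append it to a timestamped table that the analysis screen can poll.

        import sqlite3, time

        conn = sqlite3.connect("ticks.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS ticks (
                          ts REAL, symbol TEXT, price REAL, volume INTEGER)""")

        def ingest(line: str):
            """Feed lines are assumed to look like 'MSFT,30.25,1200'."""
            symbol, price, volume = line.strip().split(",")
            conn.execute("INSERT INTO ticks VALUES (?, ?, ?, ?)",
                         (time.time(), symbol, float(price), int(volume)))
            conn.commit()

        def latest(symbol: str):
            """What a screen poller (like Excel's RTD) would call on a timer."""
            return conn.execute(
                "SELECT ts, price FROM ticks WHERE symbol = ? "
                "ORDER BY ts DESC LIMIT 1", (symbol,)).fetchone()

        ingest("MSFT,30.25,1200")
        print(latest("MSFT"))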

    Read the article

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server?

    For a user profile image: after receiving the image file from the user, rename the file to <image_id>_<username>; move the image to /images/userprofile; add the image filename to a users table containing profile details like first_name, last_name, age, gender and birthday.

    For an image attached to a review written by a user: after receiving the image file from the user, rename the file to <image_id>_<review_id>; move the image to /images/reviews; add the image filename to a reviews table containing details like review_id, review_content, user_id and score.

    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize?

    Question 2: Or have another table review_images with columns review_id, image_id and image_filename, just for tracking images? Will doing a JOIN when retrieving image_filename from this table slow down performance noticeably?

    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about doing this?
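
    A sketch of the separate-table option from Question 2 (SQLite syntax here, as one possible dialect): one row per uploaded image, which avoids serializing filenames and keeps the JOIN cheap as long as review_id is indexed.

        import sqlite3

        conn = sqlite3.connect("app.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS reviews (
          review_id      INTEGER PRIMARY KEY,
          user_id        INTEGER NOT NULL,
          review_content TEXT,
          score          INTEGER
        );
        CREATE TABLE IF NOT EXISTS review_images (
          image_id       INTEGER PRIMARY KEY,
          review_id      INTEGER NOT NULL REFERENCES reviews(review_id),
          image_filename TEXT NOT NULL
        );
        -- The index is what keeps the JOIN fast as images accumulate.
        CREATE INDEX IF NOT EXISTS idx_review_images ON review_images(review_id);
        """)

        images = conn.execute("""
          SELECT i.image_filename FROM reviews r
          JOIN review_images i ON i.review_id = r.review_id
          WHERE r.review_id = ?""", (42,)).fetchall()

    For Question 3, a common convention is to shard directories by a prefix of the id (e.g. /images/reviews/4/2/42_17.jpg, a hypothetical layout) so no single folder accumulates 100K files.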

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background The "all-PK-must-be-surrogates" approach is not present in Codd's Relational Model or any SQL Standard (ANSI, ISO or other). Canonical books seems to elude this restrictions too. Oracle's own data dictionary scheme uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommend the same canonical books do: Use surrogate keys as primary keys when: There are no natural or business keys Natural or business keys are bad ( change often ) The value of natural or business key is not known at the time of inserting record Multicolumn natural keys ( usually several FK ) exceed three columns, which makes joins too verbose. Also I have not found canonical source that says natural keys need to be immutable. All I find is that they need to be very estable, i.e need to be changed only in very rare ocassions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seems to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much prevision is made for something that may or may not happen in the future ( the natural PK changed so we will have to use the RDBMS cascade update funtionality ) at the expense of day-to-day task like having to join more tables in every query and having to write code for importing data between databases, an otherwise very strightfoward procedure (due to the need to avoid PK colisions and having to create stage/equivalence tables beforehand ). Other argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PK. But indexes based on short, fix-length varchar are almost as fast as integers. The questions - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach ? - Has Codd's relational model been superceded by a newer relational model ?

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most of the time this is not a problem (assuming the projects/databases are unrelated - for instance when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations. For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could make the unique property a parameter that can be specified in the project's settings:

        class This(models.Model):
            other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS)

    Or if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs:

        for related in settings.MODELS_RELATED_TO_OTHER:
            model_name = '%s_Other' % related
            globals()[model_name] = type(model_name, (models.Model,), {
                'me': models.ForeignKey(find_model_class(related)),
                'other': models.ForeignKey(Other),
                # Some other properties all intersection tables must have
            })

    Etc. Let me stress that I'm not proposing to change the models at runtime or anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration). Is this a good design? Are there better ways to accomplish the same thing, or drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific customization need can be identified while the app model is being designed).

    Read the article

  • Is there any way to send a column value from outer query to inner sub query? [closed]

    - by chetan
    The Discussions table has this schema and sample data:

        title     description  desid  replyto  upvote  downvote  views
        browser   used         a1     none     1       1         12
        -         bad topic    b2     a1       2       3         14
        sql       database     a3     none     4       5         34
        -         crome        b4     a3       3       4         12

    The table holds two content types, main topics and comments, distinguished by the unique content identifier desid: a desid starting with 'a' is a main topic, and one starting with 'b' is a comment. For a comment, replyto is the desid of the main topic the comment is associated with. I'd like to get the list of top main topics ordered by the sum (upvote + downvote + views + number of comments on the topic). The following query gives the top topics ordered by (upvote + downvote + views):

        select * from [DB_user1212].[dbo].[discussions]
        where desid like 'a%'
        order by (upvote + downvote + visited) desc

    For (comments + upvote + downvote + views) I tried:

        select * from [DB_user1212].[dbo].[discussions]
        where desid like 'a%'
        order by ((select count(*) from [DB_user1212].[dbo].[discussions]
                   where replyto = desid) + upvote + downvote + visited) desc

    but it didn't work, because the inner subquery can't see desid from the outer query. How can I solve this? Please note that I want a solution in the query language only.
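
    For what it's worth, most SQL dialects do let a subquery reference the outer row once the outer table has an alias. A minimal runnable sketch (SQLite via Python here, with the table trimmed to the relevant columns; the same aliasing works in T-SQL):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE discussions (desid TEXT, replyto TEXT, upvote INT, downvote INT, views INT);
        INSERT INTO discussions VALUES
          ('a1', 'none', 1, 1, 12), ('b2', 'a1', 2, 3, 14),
          ('a3', 'none', 4, 5, 34), ('b4', 'a3', 3, 4, 12);
        """)

        # Giving the outer table an alias (d) is what lets the subquery see its desid.
        top = conn.execute("""
          SELECT d.desid,
                 (SELECT COUNT(*) FROM discussions c WHERE c.replyto = d.desid)
                   + d.upvote + d.downvote + d.views AS score
          FROM discussions d
          WHERE d.desid LIKE 'a%'
          ORDER BY score DESC
        """).fetchall()
        print(top)   # [('a3', 44), ('a1', 15)]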

    Read the article

  • Algorithm for rating books: Relative perception

    - by suneet
    So I am developing an application for rating books (think IMDb for books) using a relational database. Problem statement: let's say book "A" deserves an 8.5 in an absolute sense. If A is the best book I have ever read, I'll most probably rate it 9.5, whereas for someone else it might be just an average book, so they will rate it lower (say around 8). Let's assume 4 such people rate it 8. If there are 10 people like me (who have never read great literature) and they all rate it 9.5-10, this effectively makes its cumulative rating greater than 9: (9.5 * 10 + 8 * 4) / 14 = 9.1, whereas we wanted the result to be 8.5. How can I take care of (normalize) this bias due to individuals' differing perceptions? My proposed solution: here's one way I think it could be solved. We can have a variable Lit_coefficient which tells us how much knowledge a user has of literature. If I rate "A" 9.5 and person "X" rates it 8, then X must have read books much better than "A", and thus X's Lit_coefficient should be higher. We can then normalize the ratings according to each user's Lit_coefficient. Could there be a better algorithm/solution for this?
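
    One standard way to implement the intuition behind Lit_coefficient is to express each rating relative to the rater's own history (a per-user z-score) before averaging, so a 9.5 from someone who rates everything 9.5 counts for less than an 8 from a harsh critic. A sketch with made-up data:

        from statistics import mean, pstdev

        # ratings[user] = list of (book, score) pairs; toy data for illustration
        ratings = {
            "casual_reader":   [("A", 9.5), ("B", 9.0), ("C", 10.0)],
            "literature_buff": [("A", 8.0), ("D", 9.5), ("E", 6.0)],
        }

        def normalized(user, score):
            """Express a score relative to this user's personal scale."""
            scores = [s for _, s in ratings[user]]
            spread = pstdev(scores) or 1.0          # avoid dividing by zero
            return (score - mean(scores)) / spread

        book_a = [normalized(u, s) for u, pairs in ratings.items()
                  for b, s in pairs if b == "A"]
        print(mean(book_a))   # >0 means raters liked A more than their average book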

    Read the article

  • Data Synchronization in mobile apps - multiple devices, multiple users

    - by ProgrammerNewbie
    I'm looking into building my first mobile app. One of the core features is that multiple devices/users will have access to the same data, and all of them will have CRUD rights. I believe the architecture should involve a central server where all the data is stored. The devices will use an API to interact with the server to perform data operations (e.g. adding, editing or deleting a record). I imagine a scenario where synchronizing the data will become a problem. Assume the application should work when it is not connected to the Internet, and thus cannot communicate with the central server. So:

        User A is offline and edits record #100
        User B is offline and edits record #100
        User C is offline and deletes record #100
        User C goes online (presumably, record #100 should get deleted on the server)
        Users A and B go online, but the record they edited no longer exists

    All sorts of scenarios similar to the above can come up. How is this generally handled? I plan to use MySQL, but am wondering if it's not appropriate for such a problem.
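
    One common scheme (not the only one) is last-writer-wins with tombstones: deletes are recorded rather than executed, and every change carries a timestamp so the server can order conflicting offline edits deterministically. A toy sketch of the server-side merge, independent of the storage engine:

        import time

        # Server-side store: id -> {"data": ..., "updated": ts, "deleted": bool}
        store = {100: {"data": "original", "updated": 0.0, "deleted": False}}

        def apply_change(record_id, data, client_ts, deleted=False):
            """Last-writer-wins merge; the tombstone survives so a late edit can
            detect the delete instead of silently resurrecting the record."""
            current = store.get(record_id)
            if current and client_ts <= current["updated"]:
                return "rejected: a newer change is already on the server"
            store[record_id] = {"data": data, "updated": client_ts, "deleted": deleted}
            return "applied"

        # C's offline delete arrives first, then A's older offline edit:
        apply_change(100, None, client_ts=time.time(), deleted=True)
        print(apply_change(100, "A's edit", client_ts=time.time() - 3600))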

    Read the article

  • How to Structure a Trinary state in DB and Application

    - by ABMagil
    How should I structure a trinary state, in the DB especially, but also in the application? For instance, I have user feedback records which need to be reviewed before they are presented to the general public. This means a feedback reviewer must see the unreviewed feedback, then approve or reject it. I can think of a few ways to represent this:

    Two boolean flags, Seen/Unseen and Approved/Rejected. This is the simplest and probably the smallest database solution (presumably boolean fields are simple bits). The downside is that there are really only three states I care about (unseen/approved/rejected) and this creates four, including one I don't care about (a record which is seen but neither approved nor rejected is essentially unseen).

    A string column in the DB with constants/enum in the application, using Rating::APPROVED_STATE within the application and letting it equal whatever it wants in the DB. This is a larger column in the DB, and I'm concerned about doing string comparisons whenever I need these records. Perhaps that's mitigatable with an index?

    A single nullable boolean column, where true is approved, false is rejected and NULL is unseen. I'm not sure of the pros/cons of this solution.

    What rules should guide my choice? I'm already thinking in terms of DB size and the cost of finding records by state, as well as the readability of the code that ends up using this structure.
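
    A sketch of the second option in portable SQL (via sqlite3 here), with a CHECK constraint standing in for an enum and an index to keep state lookups cheap; MySQL's ENUM or PostgreSQL's native enum types are equivalent alternatives:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE feedback (
          id      INTEGER PRIMARY KEY,
          body    TEXT NOT NULL,
          -- Three explicit states; the fourth "seen but undecided" state that
          -- two booleans would allow simply cannot exist here.
          status  TEXT NOT NULL DEFAULT 'unseen'
                  CHECK (status IN ('unseen', 'approved', 'rejected'))
        );
        CREATE INDEX idx_feedback_status ON feedback(status);
        """)

        conn.execute("INSERT INTO feedback (body) VALUES ('great app!')")
        queue = conn.execute(
            "SELECT id, body FROM feedback WHERE status = 'unseen'").fetchall()
        print(queue)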

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating outside my realm? The DB in question is an in-house app used to enter requests from employees. Looking at just a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • CMS and Databases vs. DIY

    - by hozza
    I have been programming for many years now, primarily in PHP and the like, and would consider myself an intermediate programmer. Some of my online projects have now gone global and become very widely used, and I am now thinking hard about scalability etc. All of my systems so far are written in PHP with no conventional database system such as MySQL; instead, our databases use an 'operating-system style' method of storing information - files and folders, if you will. We also do not use any outside/third-party software or CMS, and so far this has worked out extremely well. Most people, when they hear about the way we do things, criticize it and say it is an idiotic idea, but normally, after seeing our systems in more depth, they are converted to our way of doing things. Is it really that bad to not use a standard database system, and to use only the one (slightly heavier than others) language, PHP? How well, on the face of it, will this kind of setup scale? N.B. Our systems include things such as account and user management, documentation development, and task/project management.

    Read the article

  • Where ORMs blur the lines between code and data, how do you decide what logic should be a stored procedure, and what should be coded?

    - by PhonicUK
    Take the following pseudocode:

        CreateInvoiceAndCalculate(ItemsAndQuantities, DispatchAddress, User);

    And say CreateInvoice does the following:

        Create a new entry in an Invoices table belonging to the specified User, to be sent to the given DispatchAddress.
        Create a new entry in an InvoiceItems table for each of the items in ItemsAndQuantities, storing the item, the quantity, and the cost of the item as of now (by looking it up from an Items table).
        Calculate the total amount of the invoice (excluding shipping and taxes) and store it in the new Invoice row.

    At a glance you wouldn't be able to tell whether this was a method in my application's code or a stored procedure in the database being exposed as a function by the ORM, and to some extent it doesn't really matter. Now technically none of this is business logic. You're not making any decisions - just performing a calculation and creating records. However, some may argue that because you are performing a calculation that affects the business (the total amount to be invoiced), this isn't something that should be done in a stored procedure and should instead be in code. So for this specific example, why would it be more appropriate to do one or the other? And where do you draw the line? Or does it even particularly matter, as long as it's sufficiently well documented?
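
    For concreteness, a sketch of the application-code side of the line, in Python with sqlite3 standing in for the real stack and with table names taken from the pseudocode above: the calculation lives in code, but wrapping the whole thing in one transaction preserves the atomicity a stored procedure would give you, which is usually the property the stored-procedure camp cares about.

        import sqlite3

        def create_invoice_and_calculate(conn, items_and_quantities, dispatch_address, user_id):
            """items_and_quantities: list of (item_id, quantity) pairs.
            Assumes invoices, invoice_items and items tables already exist."""
            with conn:                      # one transaction, like a stored procedure
                cur = conn.execute(
                    "INSERT INTO invoices (user_id, dispatch_address, total) VALUES (?, ?, 0)",
                    (user_id, dispatch_address))
                invoice_id, total = cur.lastrowid, 0
                for item_id, qty in items_and_quantities:
                    # The price is captured as of now, per the spec.
                    (price,) = conn.execute(
                        "SELECT price FROM items WHERE id = ?", (item_id,)).fetchone()
                    conn.execute(
                        "INSERT INTO invoice_items (invoice_id, item_id, quantity, price) "
                        "VALUES (?, ?, ?, ?)", (invoice_id, item_id, qty, price))
                    total += price * qty    # the "business calculation" in app code
                conn.execute("UPDATE invoices SET total = ? WHERE id = ?",
                             (total, invoice_id))
            return invoice_id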

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have long been in pursuit of an XML-based, query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include:

        Data wholly contained within XML nodes, in flat text files.
        A "native" - or at least unobtrusive - method for performing Create/Read/Update/Delete (CRUD) operations on the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, BASH or PERL to be unobtrusive, depending on the complexity of the set of dependencies.
        Server-side file-system reads and writes.
        A client-side interface element, accessible in any browser without a plug-in.

    Some extra, preferred (but optional) requirements include:

        Responds to simple SQL, or similar-syntax, queries.
        Serves the data on a bare-bones HTTPS server with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON.

    A few thoughts: what I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that unless it meets ALL the requirements - Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for prime time, nor even yet a recommendation; however, the ability to recursively traverse the filesystem is needed for such a system to be of much use. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
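
    As a point of reference, Python's standard library already covers a surprising amount of this (flat XML files, a limited XPath subset, server-side reads and writes); a minimal sketch with a hypothetical records.xml file, leaving out the HTTP layer:

        import xml.etree.ElementTree as ET

        # The "database" is just a flat text file of XML nodes, e.g.:
        # <records><user id="1">gray</user></records>
        root = ET.fromstring('<records><user id="1">gray</user></records>')

        # Read: ElementTree supports a useful subset of XPath.
        for user in root.findall("./user[@id='1']"):
            print(user.attrib, user.text)

        # Create and update:
        new = ET.SubElement(root, "user", {"id": "2"})
        new.text = "alex"

        # Delete:
        for stale in root.findall("./user[@id='1']"):
            root.remove(stale)

        # Write back to the flat file (server-side filesystem write).
        ET.ElementTree(root).write("records.xml", encoding="utf-8", xml_declaration=True)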

    Read the article

  • DB object passing between classes: singleton, static, or other?

    - by Stephen
    So I'm designing a reporting system at work. It's my first project written in an OOP style, and I'm stuck on the design choice for the DB class. Obviously I only want to create one instance of the DB class per session/user and then pass it to each of the classes that need it. What I don't know is the best practice for implementing this. Currently I have code like the following:

        class db {
            private $user   = 'USER';
            private $pass   = 'PASS';
            private $tables = array('user', 'report', 'etc...');

            function __construct() {
                // SET UP CONNECTION AND TABLES
            }
        }

        class report {
            function __construct($db, $user, $params = array()) {
                // Error checking/handling trimmed
                // $db is the database object we created
                $this->db = $db;
                // $this->user is the user object for the logged-in user
                $this->user = $user;
                $this->reportCreate();
            }

            public function setPermission($permissionId = 1) {
                // Note the $this->db - is this the best practice solution?
                $this->db->permission->find($permissionId);
                // Note the $this->user - is this the best practice solution?
                $this->user->checkPermission(1);
                $data = array();
                $this->db->reportpermission->insert($data);
            }
        } // end report

    I've been reading about static classes and have just come across singletons (though these appear to be passé already), so what's the current best practice for doing this?

    Read the article

  • Scala - learning by doing

    - by wrecked
    Coming from the PHP framework symfony (with Apache/MySQL), I would like to dive into the Scala programming language. I have already followed a few tutorials and had a look at Lift and Play; Java isn't a stranger to me either. However, the past has shown that it's easiest to learn things by just doing them. Currently we have a little - mostly AJAX-driven - application built on symfony at my company. My idea is to build a similar little project which might go into production in the future. The app should feature: high scalability and performance, a backend server, a web interface and a GUI client. Plenty of questions arise when I think about this. First of all: what's the best way to build an easy-to-maintain, structured base for it? Would it be best to establish socket-based communication between a pure-Scala server and client and access that data via Lift, or is building a Lift app that serves as the server and connecting the GUI client (via REST?) the better way? Furthermore, I wonder which database to choose. I'm familiar with (My)SQL but feel like a fool when confronted with all these things like NoSQL, MongoDB and more. Thanks in advance!

    Read the article

  • OTN: Oracle Database columns for DB engineers, including an Oracle ACE series (Japanese post; title partly unrecoverable)

    - by OTN-J Master
    [The body of this Japanese-language OTN post is mis-encoded beyond recovery. The legible fragments mention Oracle DBA & Developer Day, a 2012 column roundup, a DBA column whose 12th installment covers I/O, and an Oracle ACE series on Oracle Database SQL whose 8th installment covers Pivot/UnPivot in three parts: Pivot/UnPivot basics, Pivot usage, and UnPivot usage.]

    Read the article

  • Records not being saved to core data sqlite file

    - by esd100
    I'm a complete newbie when it comes to iOS programming, much less Core Data. It's rather non-intuitive for me, as I really came into my own with programming in MATLAB, which I guess is more of a 'scripting' language. At any rate, my problem is that I had no idea what I had to do to create a database for my application. So I read a little bit and thought I had to create a SQL database of my data and then import it. Long story short, I created a SQLite db and I want to use the work I have already done to import that data into my Core Data database. I tried exporting to comma-delimited files and XML files and reading those in, but I didn't like it and it seemed like an extra step that I shouldn't need to do. So, I imported the SQLite database into my resources and added the sqlite framework. I have my Core Data model set up, and it is setting up the SQLite database for the model correctly in the background. When I run through my program to add objects to my entities, it seems to work, and I can even fetch results afterward. However, when I inspect the Core Data SQLite file, no records have been saved. How is it possible for it to fetch results but not save them to the database?

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
        {
            // Load in the path for resources
            NSString *paths = [[NSBundle mainBundle] resourcePath];
            NSString *databaseName = @"histology.sqlite";
            NSString *databasePath = [paths stringByAppendingPathComponent:databaseName];
            [self createDatabase:databasePath];

            NSError *error;
            if ([[self managedObjectContext] save:&error]) {
                NSLog(@"Whoops, couldn't save: %@", [error localizedDescription]);
            }

            // Test listing all CELLs from the store
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            NSEntityDescription *entityMO = [NSEntityDescription entityForName:@"CELL"
                                                        inManagedObjectContext:[self managedObjectContext]];
            [fetchRequest setEntity:entityMO];
            NSArray *fetchedObjects = [[self managedObjectContext] executeFetchRequest:fetchRequest
                                                                                 error:&error];
            for (CELL *cellName in fetchedObjects) {
                // NSLog(@"cellName: %@", cellName);
            }

        - (void)createDatabase:databasePath
        {
            NSLog(@"The createDatabase function was entered.");
            NSLog(@"The databasePath is %@ ", [databasePath description]);

            // Setup the database object
            sqlite3 *histoDatabase;

            // Open the database from the filesystem
            if (sqlite3_open([databasePath UTF8String], &histoDatabase) == SQLITE_OK) {
                NSLog(@"The database was opened");

                // Setup the SQL statement and compile it for faster access
                const char *sqlStatement = "SELECT * FROM CELL";
                sqlite3_stmt *compiledStatement;
                if (sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) != SQLITE_OK) {
                    NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(histoDatabase));
                }
                if (sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
                    // Loop through the results and add them to the cell MO array
                    while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                        CELL *cellMO = [NSEntityDescription insertNewObjectForEntityForName:@"CELL"
                                                                     inManagedObjectContext:[self managedObjectContext]];
                        if (sqlite3_column_type(compiledStatement, 0) != SQLITE_NULL) {
                            cellMO.cellName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)];
                        } else {
                            cellMO.cellName = @"undefined";
                        }
                        if (sqlite3_column_type(compiledStatement, 1) != SQLITE_NULL) {
                            cellMO.cellDescription = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];
                        } else {
                            cellMO.cellDescription = @"undefined";
                        }
                        NSLog(@"The contents of NSString *cellName = %@", [cellMO.cellName description]);
                    }
                }
                // Release the compiled statement from memory
                sqlite3_finalize(compiledStatement);
            }
            sqlite3_close(histoDatabase);
        }

    I have a feeling it has something to do with the timing of opening/closing the two databases. Attached is some SQL debugging output from the terminal:

        2012-05-28 16:03:39.556 MedPix[34751:fb03] The createDatabase function was entered.
        2012-05-28 16:03:39.557 MedPix[34751:fb03] The databasePath is /Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/MedPix.app/histology.sqlite
        2012-05-28 16:03:39.559 MedPix[34751:fb03] The database was opened
        2012-05-28 16:03:39.560 MedPix[34751:fb03] The database was prepared
        2012-05-28 16:03:39.575 MedPix[34751:fb03] CoreData: annotation: Connecting to sqlite database file at "/Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/Documents/MedPix.sqlite"
        2012-05-28 16:03:39.576 MedPix[34751:fb03] CoreData: annotation: creating schema.
        2012-05-28 16:03:39.577 MedPix[34751:fb03] CoreData: sql: pragma page_size=4096
        2012-05-28 16:03:39.578 MedPix[34751:fb03] CoreData: sql: pragma auto_vacuum=2
        2012-05-28 16:03:39.630 MedPix[34751:fb03] CoreData: sql: BEGIN EXCLUSIVE
        2012-05-28 16:03:39.631 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA'
        2012-05-28 16:03:39.632 MedPix[34751:fb03] CoreData: sql: CREATE TABLE ZCELL ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZCELLDESCRIPTION VARCHAR, ZCELLNAME VARCHAR )
        ...
        2012-05-28 16:03:39.669 MedPix[34751:fb03] CoreData: annotation: Creating primary key table.
        2012-05-28 16:03:39.671 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER)
        2012-05-28 16:03:39.672 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_PRIMARYKEY(Z_ENT, Z_NAME, Z_SUPER, Z_MAX) VALUES(1, 'CELL', 0, 0)
        ...
        2012-05-28 16:03:39.701 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB)
        2012-05-28 16:03:39.702 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA'
        2012-05-28 16:03:39.703 MedPix[34751:fb03] CoreData: sql: DELETE FROM Z_METADATA WHERE Z_VERSION = ?
        2012-05-28 16:03:39.704 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_METADATA (Z_VERSION, Z_UUID, Z_PLIST) VALUES (?, ?, ?)
        2012-05-28 16:03:39.705 MedPix[34751:fb03] CoreData: sql: COMMIT
        2012-05-28 16:03:39.710 MedPix[34751:fb03] CoreData: sql: pragma cache_size=200
        2012-05-28 16:03:39.711 MedPix[34751:fb03] CoreData: sql: SELECT Z_VERSION, Z_UUID, Z_PLIST FROM Z_METADATA
        2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Beta Cell
        2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Gastric Chief Cell
        ...
        2012-05-28 16:03:39.714 MedPix[34751:fb03] The database was prepared
        2012-05-28 16:03:39.764 MedPix[34751:fb03] The createDatabase function has finished. Now fetching.
        2012-05-28 16:03:39.765 MedPix[34751:fb03] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, t0.ZCELLDESCRIPTION, t0.ZCELLNAME FROM ZCELL t0
        2012-05-28 16:03:39.766 MedPix[34751:fb03] CoreData: annotation: sql connection fetch time: 0.0008s
        2012-05-28 16:03:39.767 MedPix[34751:fb03] CoreData: annotation: total fetch execution time: 0.0016s for 0 rows.
        2012-05-28 16:03:39.768 MedPix[34751:fb03] cellName: <CELL: 0x6bbc120> (entity: CELL; id: 0x6bbc160 <x-coredata:///CELL/t57D10DDD-74E2-474F-97EE-E3BD0FF684DA34> ; data: { cellDescription = "S cells are cells which release secretin, found in the jejunum and duodenum. They are stimulated by a drop in pH to 4 or below in the small intestine's lumen. The released secretin will increase the s"; cellName = "S Cell"; organs = ( ); specimens = ( ); systems = ( ); tissues = ( ); })
        ...

    Sections were cut short to abbreviate. But note that the fetch results contain data, yet the log says the total fetch execution was for 0 rows. How can that be? Any help will be greatly appreciated, especially detailed explanations. :) Thanks.

    Read the article

  • What device can create a wireless network while connected to an ethernet router

    - by Nicolo
    Hi, I have access to an ethernet port of a wireless router; I simply connect my laptop to it via an ethernet cable. There are a total of four such ports on the wireless router. Now I want to connect a device (a wireless access point? wireless bridge? wireless switch?) via an ethernet cable to one of the other ethernet ports of the router. I want this device to act as a kind of wireless switch: it should "split" the ethernet connection coming from the router among two or more computers that connect to this device wirelessly. Basically, I have a wireless router with its wireless function switched off. I don't know the password for that router, so I can't activate the wireless function; I don't know the password for the ISP either. The only thing I can do is connect via ethernet cable to the wireless router, and this does not require a password. Now I want to use that connection and build a wireless network on top of it. What kind of device do I need? I am not really very well informed about network management, and I find the descriptions "wireless access point", "wireless bridge" and "wireless switch" confusing. I know what an ethernet switch is - what I need is a device that does the same thing but lets the clients connect to it wirelessly. What kind of device would do that? Any recommendations for specific products?

    Read the article

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a third-party provider API for a great deal of our subsystems (I can't get into who it is or what they do). The third party, however, is quite stringent on network access and has no notion of a development sandbox: access is restricted to two or three IP addresses, and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team - which is still problematic, since people's home IPs change, people travel, we have more than two devs, etc. Wide IP blocks are not permitted by the third party, nor will they allow dynamic-DNS-type services, and there is no simple console to swap IPs on the fly (e.g. if a dev's home IP changes or they are on the road). As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as third-party hosts for VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion here would be a third-party VPN that we'd all connect to, and we'd register its address as an IP origin with our third party. We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect, but Amazon only gives you so many static IPs (I believe 5?), so this would only be a stopgap until our team size outstrips our IP count at Amazon. Those were the only viable thoughts I had, but again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.

    Read the article

  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL in my access.log, I assume it is log spam from people trying to get me to visit their site, and such entries are normally followed by a 404 response. The above entry is followed by a 200 'success' response! Doing some searching, it would seem this can occur when someone is trying to use your server as a proxy. This disturbed me even more, especially because the URL in question has the word proxy in it. Visiting proxyproxys.com (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        ----------------------------------------
        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        ----------------------------------------
        CS_ProxyJudge Result=HIGH_ANONYMITY
        ----------------------------------------

    Questions: 1) Does the 200 success mean that someone was able to successfully use my server as a proxy? 2) Are there other means of confirming whether my server is being used as a proxy? 3) Can you refer me to documentation to help 'close up' the security gap, if there is one? Thanks.

    Read the article

  • apache authentication

    - by veilig
    I'm trying to set up a local webserver on my network. I want to be able to access the webserver from any machine inside my network without authenticating, and two extra domains also need access to it without authenticating; everyone else I would like to have to authenticate. So far, I can get to it from inside my network, and the two extra domains can access my webserver, but everyone else just hangs - they don't get an authentication prompt or anything. Can anyone tell me what I'm doing wrong here? This is part of my Apache sites-available file so far:

        <Directory /path/to/server/>
            Options Indexes FollowSymLinks -Multiviews
            Order Deny,Allow
            Deny from All
            Allow from 192.168
            Allow from localhost
            Allow from domain1
            Allow from domain2
            AuthType Basic
            AuthName "my authentication"
            AuthUserFile /path/to/file
            Require valid-user
            Satisfy Any
            AllowOverride All
            <Files .htaccess>
                Order Allow,Deny
                Allow from All
            </Files>
        </Directory>

    Read the article

  • Suggestions for accessing SQL Server from the internet

    - by Ian Boyd
    I need to be able to access a customer's SQL Server, and ideally their entire LAN, remotely. They have a firewall/router, but the guy responsible for it is unwilling to open ports for SQL Server and is unable to support PPTP forwarding. The admin did open VNC on a non-standard port, but since they have a dynamic IP it is difficult to find them all the time. In the past I created a VPN connection that connects back to our network, but that didn't work so well, since whenever I needed access I had to ask the computer-phobic users to double-click the icon and press Connect. I did try creating a scheduled task that attempts to keep the VPN connection back to our office up at all times by running:

        rasdial "vpn to me"

    But after a few months the VPN connection went insane - it thought it was both, and neither, connected and disconnected - and wouldn't work again until the server was rebooted. Can anyone think of a way I can access the customer's LAN that doesn't involve:

        opening ports on the router
        needing to know their external IP
        customer interaction of any kind

    Anticipated responses: "Blah blah blah, use a VPN"; "the VNC protocol has known weaknesses"; "you are unwise to lower your defenses" (you stole that line from Empire); "it's not wise to expose SQL Server directly to the internet". The customer doesn't care about any of that. The customer wants things to work.

    Read the article
