Search Results

Search found 1505 results on 61 pages for 'postgresql 8 4'.


  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Cloud in the Big Data story. In this article we will understand the role of operational databases in supporting the Big Data story. Even though we keep talking about Big Data architecture, it is extremely important to understand that a Big Data system cannot exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.

    Real World Example
    Think about it this way: you are using Facebook and you have just updated the information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship will also show up in the same list. Now here is the question – do you think the Big Data architecture is handling every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually, the answer is that Facebook uses MySQL for the various updates in the timeline as well as the various events we trigger on its homepage. It is really difficult to do without operational databases in any real-world business. Now we will look at a few examples of operational databases.

    - Relational Databases (this blog post)
    - NoSQL Databases (this blog post)
    - Key-Value Pair Databases (tomorrow’s post)
    - Document Databases (tomorrow’s post)
    - Columnar Databases (the day after’s post)
    - Graph Databases (the day after’s post)
    - Spatial Databases (the day after’s post)

    Relational Databases
    We have discussed the role of the RDBMS in the Big Data story in detail earlier, so we will not cover it extensively here. Relational databases are pretty much everywhere in businesses that have been around for many years. The importance and existence of the relational database are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open-source and widely accepted database, I suggest trying MySQL, as it has been very popular in the last few years. I also suggest trying out PostgreSQL. Besides many other essential qualities, PostgreSQL has a very interesting licensing policy. The PostgreSQL license allows modification and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, or contribute them to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.

    Nonrelational Databases (NoSQL)
    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL databases. There are plenty of NoSQL databases out in the market and selecting the right one is always very challenging. Here are a few of the properties which are essential to consider when selecting the right NoSQL database for operational purposes.

    - Data and Query Model
    - Persistence of Data and Design
    - Eventual Consistency
    - Scalability

    Though all of the above properties are interesting to have in any NoSQL database, the one that most attracts me is eventual consistency.

    Eventual Consistency
    An RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as the key mechanism for ensuring data consistency, whereas a nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects the unexpected, often. In a large distributed system there are always various nodes joining and various nodes being removed, as such systems often run on commodity servers. This happens either intentionally or accidentally. Even when one or more nodes are down, the entire system is expected to keep functioning normally. Applications should be able to perform various updates as well as retrievals of the data successfully without any issue. Additionally, this also means that the system is expected to eventually return the same updated data from all the functioning nodes. Irrespective of when a node joins the system, if it is marked to hold some data it should eventually contain the same updated data. As per Wikipedia – Eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words – informally, if no additional updates are made to a given data item, all reads to that item will eventually return the same value.

    Tomorrow
    In tomorrow’s blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How do software updates work on Ubuntu?

    - by Jonas
    I would like to know how software updates work on my Ubuntu Server 10.10. I have been recommended to use apt-get install for installing new software and apt-get update for updating software on an Ubuntu Server in production use, because these packages are tested for Ubuntu, in contrast to downloading source code and compiling the software on the box. But on my Ubuntu Server 10.10, I don't get the latest stable version of PostgreSQL (9) or the latest stable version of Nginx (8) using apt-get install. So how does this work: will this software be updated when I later run apt-get update, do I have to run apt-get install again later, or do I have to wait for the next release of Ubuntu to get them? And are patches and security updates managed in the same way, or can they be updated automatically? If there is such a setting, how do I check what my system is using?

    Read the article

  • How to install iodbc in Ubuntu 12.10

    - by Hanynowsky
    I need to install the library iodbc (depends on libodbc2) on my Quantal machine, yet there is a creepy dependency problem: it was somehow replaced by unixodbc, which I don't have installed. Here is what I get when I try to install it:

        sudo apt-get install libiodbc2-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libiodbc2-dev : Depends: libiodbc2 (= 3.52.7-2build2) but it is not going to be installed
                         Depends: iodbc (= 3.52.7-2build2) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    I can tell that iodbc conflicts with odbcinst. Yet I cannot remove it, due to the following:

        sudo apt-get remove odbcinst
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          akonadi-backend-mysql calligra-l10n-engb kde-l10n-engb kdevelop-l10n kdevelop-php-docs-l10n kdevelop-php-l10n libakonadi-kabc4 libakonadi-notes4 libakonadiprotocolinternals1 libboost-thread1.49.0 libdmtx0a libgpgme++2 libkcalcore4 libkdgantt2 libkholidays4 libkimap4 libkldap4 libkmbox4 libkmime4 libkolabxml0 libkpgp4 libkresources4 libksieve4 libprison0 libqgpgme1 libqrencode3 libxerces-c3.1
        Use 'apt-get autoremove' to remove them.
        The following packages will be REMOVED:
          akonadi-server k3b k3b-i18n katepart kde-runtime kdelibs-bin kdelibs5-plugins kdepim-runtime kdepim-strigi-plugins kdepimlibs-kio-plugins kdoctools kget kmag kmail kmix kmousetool ksystemlog kubuntu-debug-installer language-pack-kde-ar language-pack-kde-en libakonadi-calendar4 libakonadi-contact4 libakonadi-kcal4 libakonadi-kde4 libakonadi-kmime4 libcalendarsupport4 libincidenceeditorsng4 libk3b6 libkabc4 libkactivities-bin libkactivities6 libkalarmcal2 libkatepartinterfaces4 libkcal4 libkcalutils4 libkcddb4 libkde3support4 libkdepim4 libkdepimdbusinterfaces4 libkdewebkit5 libkemoticons4 libkfile4 libkgapi0 libkhtml5 libkio5 libkleo4 libkmanagesieve4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkolab0 libkonq-common libkonq5abi1 libkontactinterface4 libkparts4 libkpimidentities4 libkpimtextedit4 libkpimutils4 libkprintutils4 libksieveui4 libktexteditor4 libktnef4 libktorrent4 libkworkspace4abi2 libkxmlrpcclient4 libmailcommon4 libmailimporter4 libmailtransport4 libmessagecomposer4 libmessagecore4 libmessagelist4 libmessageviewer4 libmicroblog4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomuksync4 libnepomukutils4 libplasma3 libsoprano4 libtemplateparser4 libvirtodbc0 nepomuk-core odbcinst odbcinst1debian2 plasma-scriptengine-javascript qapt-batch soprano-daemon virtuoso-minimal
        0 upgraded, 0 newly installed, 89 to remove and 0 not upgraded.
        After this operation, 126 MB disk space will be freed.
        Do you want to continue [Y/n]?

    Other info: Why do the "iodbc" and "libmyodbc" packages conflict with each other? The root of the problem is that I need to use a new feature introduced in the latest MySQL Workbench builds (DB Migration), which uses ODBC for that. Here is what the MySQL documentation says about it:

    Linux: The Migration Wizard uses iODBC as a driver manager for all of its ODBC connections in Linux. This may give you some trouble because most Linux distributions provide ODBC drivers compiled against unixODBC. This is another driver manager not supported by MySQL Workbench, so you won't be able to use those drivers unless you compile them against iODBC. Here's what you should do. Make sure that you have iODBC installed. If you are using Debian, Ubuntu or another Debian-based distro, type this command in your terminal:

        $ sudo apt-get install iodbc libiodbc2-dev libpq-dev libssl-dev

    For RPM-based distros (RedHat, Fedora, etc.) type this command instead of the previous one:

        $ sudo yum install iodbc iodbc-dev libpqxx-devel openssl-devel

    Now we need to install the PostgreSQL ODBC drivers. Download the psqlODBC source tarball from here. Use the latest version available for download. As of this writing the latest version corresponds to the file psqlodbc-09.01.0200.tar.gz. Extract this tarball to a directory on your hard drive, open a terminal and cd into that directory. Type this in the terminal window:

        $ ./configure --with-iodbc --enable-pthreads
        $ make
        $ sudo make install

    Verify that you have the file psqlodbcw.so in the /usr/local/lib directory.

    This package probably poses the problem:

        dpkg -s odbcinst1debian2
        Package: odbcinst1debian2
        Status: install ok installed
        Priority: optional
        Section: libs
        Installed-Size: 241
        Maintainer: Ubuntu Developers <[email protected]>
        Architecture: amd64
        Multi-Arch: same
        Source: unixodbc
        Version: 2.2.14p2-5ubuntu4
        Replaces: unixodbc (<< 2.1.1-2)
        Depends: libc6 (>= 2.14), libltdl7 (>= 2.4.2), odbcinst
        Pre-Depends: multiarch-support
        Breaks: libiodbc2, libmyodbc (<< 5.1.6-2), odbc-postgresql (<< 1:09.00.0310-1.1), tdsodbc (<< 0.82-8)
        Conflicts: odbcinst1, odbcinst1debian1
        Description: Support library for accessing odbc ini files
         This package contains the libodbcinst library from unixodbc, a library used by ODBC drivers for reading their configuration settings from /etc/odbc.ini and ~/.odbc.ini.
         .
         Also contained in this package are the driver setup plugins, which describe the features supported by individual ODBC drivers.
        Homepage: http://www.unixodbc.org/
        Original-Maintainer: Steve Langasek <[email protected]>

    I have no broken packages:

    Read the article

  • Visual NHibernate Update

    - by Ricardo Peres
    I have previously talked about Visual NHibernate. It has grown since last time, now offering support for multiple databases (SQL Server, Oracle, MySQL, PostgreSQL, Firebird), generating projects from existing databases or from existing Visual Studio projects, and producing XML or Fluent mappings, to name just a few features. To me it is by far the most interesting tool for working with NHibernate that I know of (granted, I haven't tried NHibernate Profiler). For a limited period, until the final version is released, Slyce Software is offering a 30% discount, so you may want to have a look. Please note that I am in no way related to Slyce, but I made some feature requests which have been implemented (thanks, Gareth!).

    Read the article

  • Which design is better: a foreign key join table or a string storing a list of ids?

    - by Kien Thanh
    I'm building an online examination system. I have designed two tables, Question and GeneralExam. The table GeneralExam contains info about the exam, like name, description, duration, ... Now I would like to design the table GeneralQuestion; it will contain the ids of the questions that belong to a general exam. Currently, I have two ideas for designing the GeneralQuestion table:
    1. It will have two columns: general_exam_id, question_id.
    2. It will have two columns: general_exam_id, list_question_ids (string/text).
    I would like to know which design is better, or the pros and cons of each design. I'm using a PostgreSQL database.
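    As an illustration of option 1 only, here is a minimal PostgreSQL sketch; the table and column names are hypothetical stand-ins for the schema described above:

        CREATE TABLE general_question (
            general_exam_id integer NOT NULL REFERENCES general_exam(id),
            question_id     integer NOT NULL REFERENCES question(id),
            PRIMARY KEY (general_exam_id, question_id)  -- one row per (exam, question) pair
        );

        -- All questions of one exam via an indexed join; no string parsing needed.
        SELECT q.*
          FROM question q
          JOIN general_question gq ON gq.question_id = q.id
         WHERE gq.general_exam_id = 42;

    With option 2, the same lookup means splitting list_question_ids in every query, and the database cannot enforce referential integrity on the embedded ids.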

    Read the article

  • What's the best practice for async APIs that return futures on Scala?

    - by Maurício Linhares
    I have started a project to write an async PostgreSQL driver in Scala, and to be async I need to accept callbacks and use futures; but accepting both a callback and a future makes the code cumbersome, because you always have to send a callback even if it is useless. Here's a test:

        "insert a row in the database" in {
          withHandler { (handler, future) =>
            future.get(5, TimeUnit.SECONDS)
            handler.sendQuery( this.create ){ query => }.get( 5, TimeUnit.SECONDS )
            handler.sendQuery( this.insert ){ query => }.get( 5, TimeUnit.SECONDS ).rowsAffected === 1
          }
        }

    Sending the empty callback is horrible, but I couldn't find a way to make it optional or anything like that, so right now I don't have a lot of ideas on what this external API should look like. It could be something like:

        handler.sendQuery( this.create ).addListener { query => println(query) }

    But then again, I'm not sure how people are organizing APIs in this regard. Providing examples from other projects would also be great.

    Read the article

  • Is it a bad practice to store large files (10 MB) in a database?

    - by B Seven
    I am currently creating a web application that allows users to store and share files, 1 MB - 10 MB in size. It seems to me that storing the files in a database will significantly slow down database access. Is this a valid concern? Is it better to store the files in the file system and save the file name and path in the database? Are there any best practices related to storing files when working with a database? I am working in PHP and MySQL for this project, but is the issue the same for most environments (Ruby on Rails, PHP, .NET) and databases (MySQL, PostgreSQL)?
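    For illustration only, a sketch of the file-system-plus-metadata approach mentioned in the question; the table and column names are hypothetical, and the syntax shown is PostgreSQL (MySQL would use AUTO_INCREMENT instead of BIGSERIAL):

        CREATE TABLE stored_file (
            id          BIGSERIAL PRIMARY KEY,
            owner_id    BIGINT NOT NULL,
            file_name   VARCHAR(255) NOT NULL,          -- original name shown to users
            file_path   VARCHAR(1024) NOT NULL UNIQUE,  -- location on disk or in object storage
            size_bytes  BIGINT NOT NULL,
            uploaded_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

    The alternative is a BLOB/bytea column holding the bytes themselves, which keeps backups and transactions simple but makes the database and its dumps grow with every upload.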

    Read the article

  • Why is the console hanging randomly?

    - by Josh M.
    Ubuntu 10.10 Server x64 installed as Virtual Box VM. Fresh install plus postgresql and tomcat6 installed via aptitude. Rebooted the server and now when I run some command the console hangs. For instance, I run "sudo shutdown now" and then nothing happens but I am not returned to the prompt. I hit CTRL+C and nothing happens except ^C appears on the following line. I can type whatever and it will show up inline. I switch to tty2 and try to login and I only get as far as [username][enter] and that console hangs. One other thing - after "sudo reboot" the console appears to hang (just like above) when shutting down tomcat6. Any idea what's going on or what I should check? Thanks!

    Read the article

  • How to improve performance of Ubuntu Server 10.04 for my DMS system?

    - by prasanna
    I will be using a DMS (document management system) based on Java + Jackrabbit + PostgreSQL + JBoss + OpenOffice, running on Ubuntu Server 10.04. This is the only application I will be running on my server, and I want to speed up the performance of the system. Can you give me tips for tuning Ubuntu Server? Can I change any settings that would give faster system performance? My application will be used concurrently by around 70 - 80 people; we have 600 users in total. They will constantly upload and download files in the DMS. I am going to use a dedicated Dell server with a minimum of 4 GB of RAM. I appreciate the help. Thanks and regards, Prasanna
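    As one illustrative angle only, not an answer from the original thread: since PostgreSQL sits under the DMS, its memory settings are usually worth reviewing alongside any OS tuning. The values below are rough, assumed starting points for a dedicated 4 GB box, set in postgresql.conf; the right numbers depend on the actual workload:

        -- Assumed example values, not recommendations from the original post:
        --   shared_buffers       = 512MB   -- PostgreSQL's own buffer cache
        --   effective_cache_size = 2GB     -- planner's estimate of available OS cache
        --   work_mem             = 8MB     -- per-sort / per-hash memory
        -- Check what the server is currently using:
        SHOW shared_buffers;
        SHOW effective_cache_size;
        SHOW work_mem;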

    Read the article

  • Developing a live video-streaming website

    - by cawecoy
    I'm a computer science student and know a little about some of the technologies I could use to start developing my website, like PHP, Ruby on Rails and Python, and MySQL and PostgreSQL for the database. I need to know which are the best (secure, stable, low-priced, etc.) to get started with, based on my business information:
    - My website will be a live video-streaming one, similar to livestream.com
    - We need to provide a secure service for our customers. They need to have a page to create and configure their own live-streaming videos, get statistics, etc.
    - We work with Wowza Media Server running on an Apache server
    In addition, I would like to know some good practices for this kind of website development, as I am new to this. Thanks in advance!

    Read the article

  • SQLite with two python processes accessing it: one reading, one writing

    - by BBnyc
    I'm developing a small system with two components: one polls data from an internet resource and translates it into SQL data to persist it locally; the second one reads that SQL data from the local instance and serves it via JSON and a RESTful API. I was originally planning to persist the data with PostgreSQL, but because the application will have a very low volume of data to store and traffic to serve, I thought that was overkill. Is SQLite up to the job? I love the idea of the small footprint and of not needing to maintain yet another SQL server for this one task, but I am concerned about concurrency. It seems that with write-ahead logging enabled, concurrent reading and writing of a SQLite database can happen without locking either process out of the database. Can a single SQLite instance sustain two concurrent processes accessing it, if only one reads and the other writes? I started writing the code but was wondering if this is a misapplication of SQLite.
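    For illustration, the write-ahead-logging setup the question refers to uses standard SQLite pragmas; a minimal sketch (the 5000 ms timeout is an arbitrary example value):

        -- Run once against the database file; WAL mode is stored in the file
        -- and applies to all later connections.
        PRAGMA journal_mode = WAL;

        -- In each process's connection: wait up to 5 seconds for a lock
        -- instead of failing immediately with "database is locked".
        PRAGMA busy_timeout = 5000;

    In WAL mode a single writer and any number of readers can proceed concurrently; concurrent writers still take turns, which matches the one-reader/one-writer split described above.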

    Read the article

  • Habanero

    - by csharp-source.net
    An Enterprise Application Framework for .NET that is ideally suited to developing applications in an agile manner. The framework is used for producing an application from the data layer through to the front end. Free and open source under the LGPL license, it includes ORM, code generation and runtime UI generation to create one application for the desktop and the web. Features:
    * ORM: map database tables to objects in code
    * Persist property values to and from the database
    * Define all mapping in a single XML file
    * Switch between database vendors with one setting
    * Support for MySQL, MS SQL Server, MS Access, Oracle, PostgreSQL, SQLite, Firebird
    * FireStarter GUI manager for class-definition XML
    * Generate user interfaces and map properties to controls
    * Develop for both desktop (with Windows Forms) and web (with Gizmox' Visual WebGUI)
    * Generate new projects and code files
    * Generate UI forms from templates
    * Reverse engineer class definitions from existing databases
    * Support variable data sources, including an in-memory database
    Ships with FireStarter, a free database reverse-engineering, domain-modelling and code-generation tool.

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: does it seem sane to break 1NF in this way? I am not looking for help debugging code so much as for where people see problems that this might lead to.

    The Problem
    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model
    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not, for this task.

    My Proposed Solution
    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.

        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );

    I would then add "table methods" to extract debits and credits for reporting purposes:

        CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END;
        $$;

        CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END;
        $$;

    Then the journal entry table (simplified for this example):

        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );

    Then a table method and check constraints:

        CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$
            SELECT sum(amount) FROM unnest($1.journal_lines);
        $$;

        ALTER TABLE journal_entry
            ADD CHECK ((journal_entry.running_total) = 0);

        ALTER TABLE journal_line
            ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

        CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE plpgsql AS $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.entry_id, account_id, amount
                  FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;

    And finally:

        CREATE TRIGGER je_breakout_trigger
            AFTER INSERT OR UPDATE OR DELETE ON journal_entry
            FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?
    In getting to this point I have to say I have looked at four different current ERP solutions to this problem:
    1. Represent every line item as a debit and a credit against different accounts.
    2. Use foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use constraint triggers in PostgreSQL.
    4. Force all validation solely through the app logic.
    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex and requires a series of constraints and foreign keys against the table itself to make it work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem it isn't the case that everyone agrees on the solution....
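    As an illustrative usage sketch only (not from the original post), a balanced insert against the simplified schema above; the journal id, account ids and amounts are made up, and the per-line entry_id / journal_entry_id fields are left NULL because the sequence and the breakout trigger fill them in:

        INSERT INTO journal_entry (journal_id, date_posted, reference, description, journal_lines)
        VALUES (
            1,                                                 -- hypothetical journal id
            DATE '2012-05-01',
            'INV-1001',
            'Example balanced entry',
            ARRAY[
                ROW(NULL, 10, NULL, -100.00)::journal_line,    -- debit  against account 10
                ROW(NULL, 20, NULL,  100.00)::journal_line     -- credit against account 20
            ]
        );

    Because the line amounts sum to zero, the whole-row CHECK accepts the row (assuming that constraint behaves as intended on this PostgreSQL version), and the AFTER INSERT trigger then copies the two lines into journal_line.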

    Read the article

  • System Slow After Upgrading Ubuntu

    - by Aragon N
    I have an Ubuntu network machine running release 10.04.1 LTS (Lucid). On this system I have Apache, PostgreSQL and Django. For some app development I have to install PHP and php-curl... Because it is on the network, I exported the VMware machine to the internet, first upgraded the system and then installed the PHP5 packages on it. After putting it back in its old place, I noticed that queries on the new system are slower than before:
    Old system query time: 140 ms
    New system query time: 9.11 s
    I have checked /etc/network/interfaces and it seems there is no problem. I have checked /etc/resolv.conf and it seems OK. I have checked /etc/nsswitch.conf and only the hosts section is different from the old one, which had:
    hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
    I then checked time host -t A services.myapp.com and got:
    real 0m0.355s  user 0m0.010s  sys 0m0.020s
    Now, what should I check to make my system as fast as before?

    Read the article

  • What steps to take in resolving/fixing/optimizing a long boot, with possible looping errors as the culprit

    - by Tchalvak
    So my boot time has been slowing and slowing as time has gone on... I am running a number of services (e.g. apache/mysql, postgresql), but it has seen a drastic slowing lately, while I've only been applying updates as normal. I happened to check out my /var/log/boot.log and it is spammed with many lines of this: init: upstart-udev-bridge main process (2738) terminated with status 1 init: upstart-udev-bridge main process ended, respawning I wasn't able to find any solutions to that issue in google, or much talk of it at all, and I'm not really certain that error is the problem, but it is the only lead that I have. What steps should I go through to diagnose boot problems/a slow bootup?

    Read the article

  • How can I distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful Spring web application running on a MySQL or PostgreSQL database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented to address the scalability issue. Let's also assume this application is using Hibernate and that the data access layer is cleanly separated with DAOs. Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. What would be the best strategy to scale this database? Is it possible to minimize sharding code (Shard) in the application?

    Read the article

  • Enhanced LINQ to SQL Compatible ORM Solution from Devart

    Devart has recently announced the release of LinqConnect - an enhanced LINQ to SQL compatible ORM solution with extended functionality, support for SQL Server, Oracle, MySQL, PostgreSQL, and SQLite, its own visual model designer that integrates seamlessly with Visual Studio, and a SQL monitoring tool. LinqConnect allows you to quickly create a mapping model and generate data access layer code for your application, greatly decreasing development time and eliminating the need to spend time on routine tasks. It...

    Read the article

  • Greatly Enhanced LINQ Capabilities in Devart ADO.NET Data Providers

    Devart has recently announced the release of dotConnect products for Oracle, MySQL, PostgreSQL, and SQLite - ADO.NET providers that offer Entity Framework support, LINQ to SQL support, and contain an ORM model designer for developing LINQ to SQL and EF models based on different database engines. New dotConnect ADO.NET Providers offer advanced LinqConnect ORM solution (formerly known as Devart LINQ support) closely compatible with Microsoft LINQ to SQL and having its own advanced features. Devart...

    Read the article

  • Alternative to | more to display error results page by page

    - by Lane
    The command psql -d template_postgis2 -f /usr/share/postgresql/9.1/contrib/postgis-2.1/postgis.sql returns a list of errors that is too long to be viewed by scrolling back to the beginning of the output. I tried the same command with "| more" and "| less" added at the end, but it does not display the messages page by page as it should. I also tried to send the output to a file with "> file.txt", but the file does not contain what is displayed on the screen, and I don't understand why. I guess I can't do this with a psql command? Is there any other way to get the whole error message? Thanks for your help!

    Read the article

  • Open source login solution

    - by David
    Authentication is such a general problem, which most websites have to implement. There are a few commercial solutions, but all lack sufficient functionality to customize the registration process. Therefore, I am looking for an open-source alternative. I am using PHP with PostgreSQL as the database, but as far as I understand one could use authentication solutions built with other technologies and integrate them into our site in various ways. Therefore, I am looking for such solutions in any technology apart from those requiring Microsoft infrastructure... I would prefer an open-source solution which has already implemented the following features:
    - Has a password recovery procedure
    - The username is the email address of the user
    - Has "Remember me" functionality (meaning that the user is logged in automatically, without seeing the login page)
    - Email address verification
    Google has gotten me nowhere on this, and neither has a search on this site...
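    Purely as an illustration of the data such a solution typically manages (not a reference to any specific project above), a PostgreSQL sketch of a user table covering the listed features; all names here are hypothetical:

        CREATE TABLE app_user (
            id                     BIGSERIAL PRIMARY KEY,
            email                  TEXT NOT NULL UNIQUE,   -- the email address doubles as the username
            password_hash          TEXT NOT NULL,          -- store a salted hash, never the raw password
            email_verified         BOOLEAN NOT NULL DEFAULT FALSE,
            email_verify_token     TEXT,                   -- sent in the verification link
            password_reset_token   TEXT,                   -- set while password recovery is pending
            password_reset_expires TIMESTAMPTZ,
            remember_me_token      TEXT,                   -- hashed value of the persistent-login cookie
            created_at             TIMESTAMPTZ NOT NULL DEFAULT now()
        );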

    Read the article

  • Should we put an end to the NoSQL fad? Or is it more than just a passing trend?

    Should we put an end to the NoSQL fad? Or is it more than just a passing trend? The question is deliberately provocative. It is raised, in even blunter terms, by Ted Dziuba in a post entitled "I Can't Wait for NoSQL to Die". "Some engineers think that scalability and architecture are the solution [to every problem]. That is how the NoSQL movement was born," he writes. "The idea behind NoSQL is that all relational databases, such as MySQL and PostgreSQL, are obsolete, and that document-based or schema-less databases are the future." ...

    Read the article

  • Visual Studio 2010 RC and Entity Framework 4 RC Support in the New Version of ADO.NET Data Providers

    Devart has recently announced the release of dotConnect products for Oracle, MySQL, PostgreSQL, and SQLite - ADO.NET providers that offer Entity Framework support, LINQ to SQL support, and contain an ORM model designer for developing LINQ to SQL and EF models based on different database engines. New dotConnect ADO.NET providers offer complete support for Visual Studio 2010 Release Candidate and Entity Framework 4 Release Candidate. Entity Developer 2.80, a designer for modeling and code generation...

    Read the article

  • How hard will it be for a programmer to learn MS SSRS and SSIS [closed]

    - by user75380
    I have a programming background of 5 years in PHP/Python/Java, and I know MySQL and PostgreSQL. Currently, the MS SQL Business Intelligence person in our company is leaving his job in 4 months. I am thinking of trying to take his place, as I want to move into the Business Intelligence field with SSRS and SSIS. I just want to know whether it is possible for me to get my head around those things in 4 months, because I have no idea how they work or how hard it will be for me to pick them up. Can I do that? I just want to know from experienced people if I can move towards that field, and at least how I should start. In my area there is a shortage of people with these skills, so once I know the stuff I can get into junior jobs easily, but I want to hear from experienced people.

    Read the article

  • How to enable a Web portal-based enterprise platform on different domains and hosts without customization [on hold]

    - by S.Jalali
    At Coscend, a cloud and communications software product company, we have built a Web portal-based collaboration platform that we would like to host on five different Windows- and Linux-based servers in different hosting environments that run Web servers. Each of these Windows and Linux servers has a different host name and domain name (and IP address). Our team would appreciate your guidance on:
    (1) Is there a way to implement this Web portal-based platform on these Linux and Windows servers without customizing the host name, domain name and IP address for each individual instance?
    (2) Is there a way to create some variables in JavaScript for the host name and domain name and call them from the different implementations? If a reference to the host/domain names occurs on hundreds of our pages, the variables or objects would replace that.
    (3) This is part of making these JavaScript modules portable and re-usable for different environments and instances.
    The portal is written in JavaScript embedded in HTML5 and styled with CSS3. Other technologies include Flash, Flex, PostgreSQL and MySQL.

    Read the article

  • Version 0.8 of the D2RQ mapping server released, supporting SPARQL 1.1 and new relational databases

    New version of D2RQ. Version 0.8 of the D2RQ mapping server was released today. The tool requires at least Java 5 to run and is available on its official site. D2RQ makes it possible to expose relational data stored in classic DBMSs (MySQL, Oracle, PostgreSQL, SQL Server, etc.) as linked data on the Web of Data. D2RQ has taken off over roughly the last three weeks, with strong participation in the project from the pharmaceutical group UCB. The biopharmaceutical company UCB focuses on the discovery and the...

    Read the article
