Search Results

Search found 61580 results on 2464 pages for 'document based database'.


  • Apache address-based access control

    - by stijn
    I have an Apache instance serving different locations, e.g. https://host.com/jira, https://host.com/svn, https://host.com/websvn and https://host.com/phpmyadmin. Each of these has access control rules based on IP address/hostname. Some of them use the same configuration though, so I have to repeat the same rules each time:

        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

    Is there a way to make these reusable so that I don't have to change each instance every time something changes? I.e., can I write this once, then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config:

        <myaccessrule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </myaccessrule>

        <Proxy /jira*>
            AccessRule = myaccessrule
        </Proxy>
        <Location /svn>
            AccessRule = myaccessrule
        </Location>
        <Directory /websvn>
            AccessRule = myaccessrule
        </Directory>
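
    One way to get close to that pseudo-config is mod_macro (a separate module on Apache 2.2, included with 2.4); a minimal sketch, assuming the module is loaded and with the macro name chosen freely:

        <Macro SharedAccessRule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </Macro>

        <Location /svn>
            Use SharedAccessRule
        </Location>
        <Directory /websvn>
            Use SharedAccessRule
        </Directory>

    A plain "Include /etc/apache2/shared-access.conf" inside each <Location>/<Directory> block achieves the same reuse without any extra module.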

    Read the article

  • Best console-based text editor, not only for programmers [closed]

    - by robo
    I need a console-based text editor for writing both source code and human-readable text such as emails. I need it to be user-friendly, which for me means:

    - You can use it the same way as Notepad or gedit.
    - You can use the mouse.
    - If you need your mother or girlfriend or somebody to edit your text, they will know what to do; they will not realise it is a console and will have the feeling it is something like Notepad.
    - Copy, paste and undo work with the usual key combinations (Ctrl-C, Ctrl-V, Ctrl-Z).
    - Shift and the arrow keys select text as usual.

    And when I return to the computer, I want to use the same editor for programming. There I expect:

    - syntax highlighting
    - auto-indenting
    - replacing spaces with tabs
    - keyboard shortcuts for compiling
    - the possibility to configure it to use a debugger
    - autocompletion for C#, Java, C++ and other languages
    - other things I expect from IDEs.

    I worked with and configured Vim for a few years, but it never fulfilled all of my expectations (though it almost did). I think I could get Vim configured perfectly if I had a few more weeks for configuring it. Unfortunately I cannot afford to keep configuring Vim forever. Is there another alternative? Hopefully some editor I set up once and it will work forever? What do you use? I often hear people are using Emacs. Is it worth learning?

    Read the article

  • Localized database for customers

    - by Jim
    The company I work for has just moved to AWS, and currently they have one very large central database, with the instance located in America. However, one of their clients has requested that all of their data be held in the EU. Creating an AWS instance in Ireland isn't a problem; the problem is the database and how to manage it. We were considering running another database in the EU for European customers, using a different primary key increment step so that the primary keys will never conflict in case the two locations need to be merged in the future. The problem is, if we have a customer that uses our system in both America and the EU, we would have to create two accounts for that user, and reporting across both regions would not be possible as the connection latency would be too high. Is there an alternative way to set this up?
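
    For the "different primary key step" idea, MySQL-compatible engines expose this directly; a sketch, assuming MySQL/MariaDB (the option names differ on other engines):

        # my.cnf on the US instance
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # my.cnf on the EU instance
        auto_increment_increment = 2
        auto_increment_offset    = 2

    With this, one region generates odd IDs and the other even IDs, so rows can be merged later without key collisions.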

    Read the article

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6/i integrated card. We're using Nagios at present, and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of:

        /dev/megaraid_sas_ioctl_node

    This device node doesn't exist by default; you have to create it by hand using something like:

        mknod /dev/megaraid_sas_ioctl_node c 253 0

    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local/rules that contained:

        DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600"

    But this doesn't work - no device node after a reboot. The dmesg output indicates the megasas driver is loaded and functional:

        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state

    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local... So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver? Cheers, Steve
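
    For reference, a sketch of one possible rule, assuming the driver registers a misc character device called megaraid_sas_ioctl (check what actually shows up under /sys/class/misc or with udevadm info on your kernel; the names here are illustrative):

        # /etc/udev/rules.d/99-megaraid.rules
        # Rename the misc device the megasas driver registers and restrict its permissions.
        KERNEL=="megaraid_sas_ioctl", SUBSYSTEM=="misc", NAME="megaraid_sas_ioctl_node", MODE="0600"

    udev only acts on devices the kernel actually announces, so if the driver does not register a device node at all, a rule alone cannot conjure one and the mknod-in-rc.local fallback remains the pragmatic answer.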

    Read the article

  • Linux port-based routing using iptables/ip route

    - by user42055
    I have the following setup:

         192.168.0.4        192.168.0.6       192.168.0.1
        +-----------+      +---------+      +----------+
        |WORKSTATION|------|  LINUX  |------| GATEWAY  |
        +-----------+      +---------+      +----------+
              192.168.150.10 |
                             | 192.168.150.9
                        +---------+
                        |   VPN   |
                        +---------+
                        192.168.150.1

    WORKSTATION has a default route of 192.168.0.6; LINUX has a default route of 192.168.0.1. I am trying to use the gateway as the default route, but route port 80 traffic via the VPN. Based on what I read at http://www.linuxhorizon.ro/iproute2.html I have tried this:

        echo "1 VPN" >> /etc/iproute2/rt_tables
        sysctl net.ipv4.conf.eth0.rp_filter=0
        sysctl net.ipv4.conf.tun0.rp_filter=0
        sysctl net.ipv4.conf.all.rp_filter=0
        iptables -A PREROUTING -t mangle -i eth0 -p tcp --dport 80 -j MARK --set-mark 0x1
        ip route add default via 192.168.150.9 dev tun0 table VPN
        ip rule add from all fwmark 0x1 table VPN

    When I run "tcpdump -i eth0 port 80" on LINUX and open a web page on WORKSTATION, I don't see the traffic go through LINUX at all. When I run a ping from WORKSTATION, I get this back for some packets:

        92 bytes from 192.168.0.6: Redirect Host (New addr: 192.168.0.1)
        Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src            Dst
         4  5  00 0054 de91   0 0000  3f  01 4ed3 192.168.0.4    139.134.2.18

    Is this why my routing is not working? Do I need to put GATEWAY and LINUX on different subnets to prevent WORKSTATION being redirected to GATEWAY? Do I need to use NAT at all, or can I do this with routing alone (which is what I want)?
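
    For reference, a sketch of the usual mark-and-route recipe on the LINUX box, assuming eth0 faces the LAN and tun0 is the VPN (addresses and the mark value are illustrative):

        iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 -j MARK --set-mark 0x1
        ip rule add fwmark 0x1 table VPN
        ip route add default via 192.168.150.9 dev tun0 table VPN
        # forwarding must be on, and marked traffic usually needs its source rewritten
        # so replies come back over the tunnel
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    None of this helps unless WORKSTATION's port 80 packets actually reach LINUX first, though: an ICMP "Redirect Host" like the one above means LINUX is telling WORKSTATION to talk to GATEWAY directly, so suppressing redirects (net.ipv4.conf.eth0.send_redirects=0) is worth checking as well.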

    Read the article

  • Denying access to website via htaccess based on http header

    - by neekster
    I've been trying for ages to get this to work and I can't put my finger on it. What I'm trying to do is block access to a site from a number of countries, based on the CF-IPCountry header added by CloudFlare. I figured .htaccess was a suitable way to do this. We are running LiteSpeed 4.2.4 on top of DirectAdmin as a control panel. The problem we're having is that the .htaccess rule doesn't seem to do anything. Here's the rule we tried:

        SetEnvIf CF-IPCountry AU UnwantedCountry=1
        Order allow,deny
        Deny from env=UnwantedCountry
        Allow from all

    That makes no difference at all; connections are still accepted. Just to check that the rule was at least being processed, I changed "Allow from all" to "Deny from all", and connections were refused. So it appears to be a problem with the variable. Here are the relevant headers that come in with the request:

        Connection: Keep-Alive
        Accept-Encoding: gzip
        CF-Connecting-IP: xx.xx.xx.xx
        CF-IPCountry: AU
        X-Forwarded-For: xx.xx.xx.xx.xx
        CF-RAY: c9062956e2d04b6
        X-Forwarded-Proto: http
        CF-Visitor: {"scheme":"http"}
        Zone-Name: xx.com.au

    Hopefully someone can help me out; this has been driving me nuts for too long. Thanks
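
    If SetEnvIf keeps misbehaving under LiteSpeed, mod_rewrite can match the same header directly; a sketch, with the country codes to block listed in the condition (AU here only mirrors the example above):

        RewriteEngine On
        RewriteCond %{HTTP:CF-IPCountry} ^(AU)$ [NC]
        RewriteRule ^ - [F,L]

    The %{HTTP:header-name} form reads an arbitrary request header, so this avoids relying on the environment variable being set early enough for the Order/Deny check.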

    Read the article

  • In a shell script, check the version of an installed package and make a decision based on the output

    - by DJDarkViper
    I'm looking to write a cross-distro / cross-version shell script that makes sure a specific version of PHP is installed. Example: Ubuntu 12.04 has 5.3, Ubuntu 13.10 has 5.5, Debian 7 has 5.4. I need this script, when run on a distro that has an old version of PHP, to update the repo to point to a package for 5.4, and if the distro has too new a version, to downgrade to 5.4 appropriately. I'm still not entirely sure of the technical limits of what you can do in the shell, and I'll be perfectly frank that I'm still not totally used to the existing tools. The best I can think of at the moment is:

        php -v | grep "PHP 5"

    but that returns a bunch of potentially changeable, granular characters (PHP 5.4.4-14+deb7u5 (cli) (built: Oct 3 2013 09:24:58)). I'm not sure what to pipe to after this to extract the characters I'm interested in. I'm not sure if I'm being totally clear or how to ask this... Basically, in an automated shell script for Linux distros, how do I extract the PHP version (and preferably just the version number) and make a decision based on that output?

    EDIT: This line ended up doing pretty well:

        php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3

    A bit long in the tooth, but it gives me "5.3", "5.4" and "5.5", which is exactly what I need to work with.
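
    A slightly more robust sketch that avoids parsing the banner entirely, assuming the php CLI is installed (PHP_MAJOR_VERSION and PHP_MINOR_VERSION are real PHP constants; the case branches are illustrative):

        #!/bin/sh
        # Ask PHP itself for major.minor, e.g. "5.4"
        ver=$(php -r 'echo PHP_MAJOR_VERSION . "." . PHP_MINOR_VERSION;')

        case "$ver" in
            5.4) echo "PHP 5.4 already installed, nothing to do" ;;
            5.3) echo "Old PHP ($ver): point the repo at a 5.4 package here" ;;
            *)   echo "Newer PHP ($ver): downgrade to 5.4 here" ;;
        esac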

    Read the article

  • Delete cell content in Libre (Open) Office based on the cell value

    - by take2
    I have a huge CSV file (tens of thousands of rows) that I need to filter based on different criteria. After trying to find a proper CSV editor, I decided to use LibreOffice Calc. CSVed is great, but it supports neither UTF-8 nor macros for advanced filtering. There are 4 columns, 3 of which contain numbers (with decimals) and 1 of which contains text. I'm trying to find a way to delete rows with a macro. I can achieve the desired behaviour with filters too, but it's annoying to type all of the filtering values over and over again, and there doesn't seem to be a way to export the filter and use it repeatedly. These rows should be deleted: the ones that don't contain certain words in the text column (column A). There are a few thousand different words used in that column and I want to keep only the rows that contain one of about 30 words. Additionally, the numbers in the other columns should be bigger than 3.8 (column B), bigger than 4.5 (column C) and smaller than 20 (column C). The row-deletion type is "Shift up". Hopefully I have explained it well. Thanks a lot in advance for your help!
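
    A minimal LibreOffice Basic sketch of that kind of row filter, assuming the data sits on the first sheet starting in row 2 and with the keyword list shortened to two placeholder entries (deleting rows in Calc shifts the ones below up, matching the "Shift up" requirement):

        Sub DeleteUnwantedRows
            Dim oSheet As Object, oText As String
            Dim aKeep(1) As String, i As Integer, k As Integer, keep As Boolean
            aKeep(0) = "word1" : aKeep(1) = "word2"    ' extend to the ~30 real keywords
            oSheet = ThisComponent.Sheets.getByIndex(0)
            For i = 50000 To 1 Step -1                 ' walk bottom-up so deletions stay safe
                oText = oSheet.getCellByPosition(0, i).getString()
                keep = False
                For k = 0 To UBound(aKeep)
                    If InStr(oText, aKeep(k)) > 0 Then keep = True
                Next k
                keep = keep And oSheet.getCellByPosition(1, i).getValue() > 3.8 _
                            And oSheet.getCellByPosition(2, i).getValue() > 4.5 _
                            And oSheet.getCellByPosition(2, i).getValue() < 20
                If Not keep Then oSheet.Rows.removeByIndex(i, 1)
            Next i
        End Sub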

    Read the article

  • Change source address based on destination IP

    - by hgj
    We have several "router" machines that gather a lot of external IP addresses on the same host and redirect, NAT or proxy the traffic to the internal network. They also act as routers for the machines on the internal network. This works fine, however I am unable to make the routing table, so I can change the source address, based on the destination a machine from the internal network want to access. Let's say I have a router, that has public addresses P1 (5.5.5.1/24) and P2 (5.5.5.2/24). All traffic goes through P1, but if necessary, the host is reachable on P2 too. This looks like this and works fine: > ip addr ... 1: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether aa:bb:cc:dd:ee:11 brd ff:ff:ff:ff:ff:ff inet 5.5.5.1/24 brd 5.5.5.255 scope global eth1 inet 5.5.5.2/24 brd 5.5.5.255 scope global secondary eth1:p2 ... Now I want to use P2 as the source address, if I want to access the Google DNS service for example (8.8.8.8). So I add a row in the routing table like: > ip route add 8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2 > ip route ... default via 5.5.5.254 dev eth1 5.5.5.0/24 dev eth1 proto kernel scope link src 5.5.5.1 8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2 ... But this does not work. If I ping 8.8.8.8, the host still uses P1 as the source address, and does not use P2 at all for outgoing connections. Am I doing it right? I guess not...

    Read the article

  • Upgraded to Ubuntu 12.04 from 10.04 and have to transfer database from PostgreSQL 8.4 to 9.1

    - by Stpn
    I upgraded a server with a Rails application from Ubuntu 10.04 to 12.04 and now cannot connect to the PostgreSQL database... Here is the error message from the Rails app:

        could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Also, pg_ctl start is not recognized as a command.

    EDIT: Turns out my database is on PostgreSQL 8.4 and my server is now running 9.1, so all the database files / configs are on 8.4. How can I transfer them? Just straight copy the old pg_hba.conf?
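
    On Debian/Ubuntu the usual path is the cluster tools from postgresql-common rather than copying files by hand; a sketch, assuming both the 8.4 and 9.1 packages are still installed and you have a fresh backup (pg_dumpall) first:

        sudo pg_dropcluster --stop 9.1 main     # remove the empty 9.1 cluster created at install time
        sudo pg_upgradecluster 8.4 main         # migrate data and config (including pg_hba.conf) to 9.1
        sudo pg_dropcluster 8.4 main            # once the app works, drop the old cluster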

    Read the article

  • Installing Drupal: Database configuration problem.

    - by abelenky
    I am trying to install Drupal 6.16 on a clean website. I get through the "Verify Requirements" page easily. On the Database Configuration, I supply all the proper info, but "Save and Continue" returns me back to the same page, with no error message. I am unable to proceed past this point. I've verified my info with the ISP, including a non-local database host (under Advanced Options), and that the database user has full DBA rights. The lack of an error message is particularly frustrating. Do you have any ideas what the problem is, or how to pursue it and resolve it?

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape, it has to be funneled through the file server. With the increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas:

    1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, so I'm wondering if something HDD-based would be better.

    2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup. I was thinking of something like that, but with no RAID involved: all disks standalone so they can be used/installed/removed on an individual basis. Does any such thing exist?

    Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • VirtualName-based local development host behind corporate proxy (MAMP)

    - by geerlingguy
    I am behind a corporate proxy server/firewall, and this firewall seems not to be too happy with my idea of local development. On my home computer (Mac/Leopard), I have MAMP running, with a rule in /etc/hosts that directs dev.example.com to 127.0.0.1, and I have a virtual host set up in the httpd.conf file, which works great for me. However, at work I set up the exact same configuration but am not able to access dev.example.com, likely due to some address/DNS translation going on via the proxy server. Here are the relevant details from Terminal:

        $ ping dev.example.com
        PING dev.example.com (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.025 ms

        $ host dev.example.com
        Host dev.example.com not found: 3(NXDOMAIN)

    I've tried adding dev.example.com to the list of bypass addresses in System Preferences (the 'Bypass proxy settings for these Hosts & Domains' list), but that had no effect. Is there any way I can develop locally using name-based hosts at work? I can access localhost, but can't get to dev.example.com (or any other custom virtual hosts) here at work, which complicates other matters related to the sites I'm working on...
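
    Worth noting when reading that output: ping resolves through the system resolver (so it sees /etc/hosts and returns 127.0.0.1), while the host command queries DNS servers directly and ignores /etc/hosts, so its NXDOMAIN is expected and not necessarily the problem. A sketch of how to ask the actual system resolver on a Mac (the hostname is the one from the question):

        dscacheutil -q host -a name dev.example.com

    If that returns 127.0.0.1, name resolution is fine and the breakage is more likely the browser's proxy settings sending dev.example.com to the corporate proxy instead of connecting locally.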

    Read the article

  • How can you exclude folders from appearing in the Recent Items feature of Windows 7 start menu?

    - by Jordan Weinstein
    To be clear, I like the 'recent items' feature and I do not want to turn it off. I work at a law firm where we integrate Office with a document management system (DMS). If recent items are turned on, those DMS-opened documents will show up in the recent items of the Windows 7 start menu when hovering over Word (or Excel, PowerPoint, etc.). However, the integration doesn't work correctly, so if a user were to click on one of those entries, something wouldn't work right. In short, we've always needed to turn off Recent Items completely for a DMS-integrated workstation. I'm curious if anyone knows of a way to exclude a directory from being "captured", so to speak. When you open a DMS document, the file gets copied to a local directory where it is saved as you work, until you close it and it is checked back in to the DMS. I'd like to be able to exclude that local directory from recent items, so local files in My Docs and on the Desktop would show up in recent items, but not DMS-opened documents. Hope this makes sense.

    Read the article

  • Barnyard Service - MySQL Error

    - by SLYN
    I installed barnyard2 and set it up as a service. When I run service barnyard2 start, Barnyard2 fails. When I run tail -100 /var/log/messages I see a fault like this (#012 is how syslog escapes newlines; expanded here for readability):

        ERROR database: 'mysql' support is not compiled into this build of snort
        Aug 22 11:52:06 barnyard2[25771]: FATAL ERROR: If this build of barnyard2 was obtained as a binary distribution (e.g., rpm,
        or Windows), then check for alternate builds that contains the necessary
        'mysql' support.

        If this build of barnyard2 was compiled by you, then re-run the
        the ./configure script using the '--with-mysql' switch.
        For non-standard installations of a database, the '--with-mysql=DIR'
        syntax may need to be used to specify the base directory of the DB install.

        See the database documentation for cursory details (doc/README.database).
        and the URL to the most recent database plugin documentation.
        Aug 22 11:52:06 barnyard2[25771]: Barnyard2 exiting

    What should I do to solve this problem? When I installed Barnyard2, I used these commands:

        # ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql
        # make ; make install

    (My system is CentOS 6.5 x86_64.)
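
    A quick sanity check is to confirm that the binary the service actually runs is the one compiled with MySQL support; a sketch using standard commands (paths are illustrative):

        which barnyard2                          # is the service using /usr/local/bin or an older packaged copy?
        ldd $(which barnyard2) | grep -i mysql   # a MySQL-enabled build links against libmysqlclient

    If ldd shows no libmysqlclient, the running binary is not the one produced by the ./configure --with-mysql build.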

    Read the article

  • Issues with returned mail sent to web-based email domains

    - by Beeder
    My company is having issues with returned mail that we send out to external domains. A few weeks ago we replaced a firewall and changed ISP providers, and subsequently began having issues RECEIVING emails from external sources because we hadn't updated our new IPs in the DNS records. After making the necessary configuration changes and setting up SMTP forwarding over port 25 to our mail server, everything was working fine up until a few days ago, when we started having mail we sent out returned to us. We aren't having any trouble communicating internally (to recipients on our domain), but it seems we're having trouble with outbound messages to web-based email recipients (@hotmail, @live, @yahoo, @gmail, etc.). Currently we are running Server 2003 SP2 and Exchange 2003. I'm very unfamiliar with configuring Exchange and could really use some help in narrowing down the possibilities. I did some research and am becoming suspicious of Sender ID being the culprit, due to our recent IP address change and the likelihood that Sender ID is identifying us as a fake domain. Am I going in entirely the wrong direction? Any input or guidance would be infinitely appreciated. This is the message that is returned when an outbound message fails; this particular one was sent to my @live.com account for testing purposes:

        Your message did not reach some or all of the intended recipients.
        The following recipient(s) could not be reached:
        [email protected] on 5/17/2012 3:02 PM
        There was a SMTP communication problem with the recipient's email server.
        Please contact your system administrator.
        Unfortunately, messages from xx.x.xx.x weren't sent. Please contact your
        Internet service provider since part of their network is on our block list.

    I tried a reverse DNS lookup and found that we are set up with forward-confirmed reverse DNS. So do I just need to contact my ISP and have them correct their DNS records, or is this something I can solve on our end?
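
    If Sender ID / SPF is the suspect, the published policy has to list the new outbound IP; a sketch of an SPF TXT record, assuming a domain of example.com and a new sending address of 203.0.113.10 (both placeholders):

        example.com.  IN TXT  "v=spf1 mx ip4:203.0.113.10 -all"

    Checking the current record (nslookup -type=TXT example.com) against the IP the new ISP assigned is a quick way to confirm or rule this out; the block-list wording in the bounce, though, points at the ISP's IP range reputation rather than DNS alone.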

    Read the article

  • How can I sync Access databases and keep them up-to-date?

    - by user327472
    I have an Access database on my server. We split it up and use the front-end database on local computers for searching data, adding new records and running reports. If we update or add a new record, that writes to the back-end database. I want to use this database in another building with other servers; those servers have no direct connection. How can I sync both back-end databases to keep the data up to date? These details may be useful: it's a large amount of data - about 25,750 client records - and I guess there are more than 25 tables at around 80 MB.

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I basically need is for the OS to know that all of the client's network subdomains (*.example.com) should go through the VPN connection. I tried a couple of options:

    - Changing the order of network services and putting the VPN on top, but this works the same as "Send all traffic over VPN connection".
    - Using the "VPN on Demand" option from the advanced network options, but this feature is quite rubbish to be honest. It seems to work only in Safari (?!) and it doesn't route the connection; it basically just triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file, but considering there could be hundreds of servers (and therefore hostnames and IPs too), that's a ridiculous job, and if a new server appears on the network I'd need to edit the hosts file again. Sisyphean labours. However, this works very simply on Windows: if a hostname is not reachable through the default network interface, it seems to try the VPN connection, and that works brilliantly. So how can I achieve that on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains through a different DNS)? PS: I'm using the latest version, 10.6.6. PS2: I am using the VPN to access an intranet, version control servers (svn://), Samba shares and SSH access to servers.
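
    The "directing certain domains through a different DNS" part does exist on macOS as split DNS; a sketch, assuming the client's internal DNS server is 10.0.0.53 (a placeholder) and their domain is example.com:

        # /etc/resolver/example.com
        nameserver 10.0.0.53

    This only affects name resolution for *.example.com; the routes to whatever addresses those names resolve to still have to go over the VPN, which is why per-hostname routing, as on Windows, ultimately still comes down to IP ranges.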

    Read the article

  • connect to my database from another computer

    - by user3482102
    Sorry for being a noob. I am a student working on a DBMS project on my laptop. I have installed MariaDB and I have root access; the same is the case with my friend's laptop. The problem is that we both want to work on the same database collectively, as we are partners on a team for the project. How can I create a database in MariaDB that we both can share? How do we access that database? Please specify the software to use and how to implement it.
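
    One common setup is to host the database on one laptop and let the other connect over the network; a sketch, assuming MariaDB on the hosting laptop, with the database, user and password names purely illustrative:

        # in the MariaDB server config (e.g. /etc/mysql/my.cnf), allow non-local connections
        [mysqld]
        bind-address = 0.0.0.0

        -- then, in the mysql client as root, create a shared database and a remote user
        CREATE DATABASE projectdb;
        CREATE USER 'teammate'@'%' IDENTIFIED BY 'choose-a-password';
        GRANT ALL PRIVILEGES ON projectdb.* TO 'teammate'@'%';
        FLUSH PRIVILEGES;

    The teammate then connects with "mysql -h <hosting-laptop-ip> -u teammate -p projectdb". Both laptops need to be on the same network (or a shared VPN), and the host's firewall must allow port 3306.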

    Read the article

  • Why are my SharePoint downloads not completing for outside users?

    - by CT
    I am using WSS 3.0 with Windows Server 2003 and am running into the following problem. On a pretty frequent basis, outside users are having trouble downloading documents: some downloads report as complete while the file is actually incomplete. For instance, a PDF is a 17 MB file. If I download it from within the office, all 17 MB are downloaded and it opens. If I download it from an outside connection, it may download anywhere from 5-10 MB of the file and then say it is complete. When these partial downloads are opened, the user gets the error "this file is corrupt and cannot be repaired". I have solved this problem on some occasions by simply deleting the document and uploading a new copy; this does not always work. Are there known bugs? Are there Internet settings that need to be modified on the outside user's machine? Does anyone else run into this?

    Read the article

  • How to cache dynamic javascript/jquery/ajax/json content with Akamai

    - by Starfs
    I'm trying to wrap my head around how things are cached on a CDN, and it is new territory for me. In the document we received about sending in environment requests, it says "Dynamically-generated content will not benefit much from EdgeSuite". I feel like this is a simplified statement and that there has to be a way to cache dynamically generated content if the tools are configured correctly. The site we are working with runs off a WordPress database and uses JavaScript and AJAX to build the pages, based on the JSON objects that PHP scripts have generated. The process: the user's browser requests a URL, the browser talks to the EdgeSuite tools, which will have cached certain pre-defined elements, and then requests from the host web server anything that is not cached; once EdgeSuite has compiled a combination of the two, it sends that information back to the browser. Can we not simply cache all JSON objects (and of course images, JS, CSS) so that the web browser never has to hit the host server's database, at which point, in essence, we have cached our dynamic content? Does anyone have any pointers on the most efficient configuration for this type of system -- Akamai/CDN -- to serve JavaScript/AJAX/JSON-generated pages that ideally already hit pre-cached JSON data? Any and all feedback is welcome!
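
    A CDN typically honours the cache headers the origin sends, so the usual lever for caching those JSON responses is to mark them cacheable at the origin; a sketch, assuming Apache with mod_headers at the origin and that the Akamai configuration is set to respect origin Cache-Control headers (worth confirming with your account team):

        <FilesMatch "\.(json|js|css|png|jpg)$">
            Header set Cache-Control "public, max-age=3600"
        </FilesMatch>

    Anything personalised or frequently changing would need a shorter max-age or explicit purging, but static JSON fixtures can then be served from the edge without touching the WordPress database.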

    Read the article

  • SQL SERVER – Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video

    - by pinaldave
    One of the most common errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error they face during development. However, they rarely install a fresh SQL Server. As the installation of SQL Server is a rare occasion – unless you are the DBA responsible for such an instance – the errors faced during installation are pretty rare as well. I have earlier written an article about this which describes how to resolve errors related to the SQL Server connection. Even though the step-by-step directions are pretty simple, there are many first-time IT professionals who are not able to figure out how to resolve this error. I have quickly built a video which covers most of the solutions related to resolving the connection error. In the Fix SQL Server Connection Error article, the following workarounds are described:

    - SQL Server Services
    - TCP/IP Settings
    - Firewall Settings
    - Enable Remote Connection
    - Browser Services
    - Firewall exception for sqlbrowser.exe
    - Recreating the Alias

    Related Tips in SQL in Sixty Seconds:

    - SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: )
    - SQL SERVER – Could not connect to TCP error code 10061: No connection could be made because the target machine actively refused it
    - SQL SERVER – Connecting to Server Using Windows Authentication by SQLCMD
    - SQL SERVER – Fix : Error: 15372 Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed
    - SQL SERVER – Dedicated Access Control for SQL Server Express Edition – An error occurred while obtaining the dedicated administrator connection (DAC) port.
    - SQL SERVER – Fix : Error: 4064 – Cannot open user default database. Login failed. Login failed for user

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
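
    For completeness, the quickest way to check basic connectivity from a client machine once those settings are in place is sqlcmd; a sketch, with the server and instance names as placeholders:

        sqlcmd -S MYSERVER\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"

    The -E switch uses Windows Authentication; if this returns the server name, the service, protocol and firewall layers are all working and any remaining failures are on the application side.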

    Read the article

  • HOUG Conference 2010, doors open today!

    - by Fekete Zoltán
    IT STARTS TODAY! You can still register on site, at the Ramada Hotel & Resort Lake Balaton. Let's meet in Balatonalmádi between 22 and 24 March 2010! The HOUG Conference 2010 kicks off today with its technical programme. At this annual event of the Hungarian Oracle user community, many users report on their Oracle systems, their experiences, and the business value of those systems. The conference programme:

    - On Tuesday, in the public administration track, I am giving the following talk: Possible uses of an ideal high-performance, fault-tolerant environment for government projects - Oracle Exadata, Database Machine
    - On Wednesday I will chair the Business Intelligence and Data Warehouse track, and will also give a talk titled: The ideal OLTP and DW environment for Oracle databases - Oracle Exadata, Database Machine

    On Wednesday we will hear a number of interesting talks:

    - Management Excellence with the Oracle Hyperion EPM applications - Ribarics Pál
    - SZEZÁM - business intelligence solutions in the life of Magyar Nemzeti Vagyonkezelő Zrt. - Holl Zoltán
    - JD Edwards EnterpriseOne and Oracle BI EE, the Fornetti recipe: jam in the pastry - Bitter Tibor (E-best Kft.), Király János (Fornetti Kft.)
    - Data warehouses stepping on the gas (towards new roads) - Kránicz László (OTP Bank Nyrt.)
    - Introducing the Oracle-Hyperion Interactive Reporting end-user ad-hoc query tool at the KSH and experiences from its use - Pap Imre (Központi Statisztikai Hivatal)
    - The ideal OLTP and DW environment for Oracle databases - Fekete Zoltán (Oracle Hungary Kft.)
    - A BI Suite rollout at MKB-Euroleasing - Mitró Péter (MKB Euroleasing Autóhitel Zrt.)
    - An Essbase-based planning system at the Bay Zoltán Foundation for Applied Research - Hoffman Zoltán (Bay Zoltán Alkalmazott Kutatási Közalapítvány), Szabó Gábor (R&R Software Zrt.)
    - Data warehouse implementation on Oracle foundations at National Instruments - Vágó Csaba, Németh Márk (National Instruments Hungary Kft.)
    - Introducing a banking data mart on data warehouse foundations - Dési Balázs (HP Magyarország Kft.)

    Read the article

  • The Scoop: Oracle E-Business Suite Support on 64-bit Linux

    - by Terri Noyes
    This article addresses frequently asked questions about Oracle E-Business Suite (EBS) 64-bit Linux support.

    Q: Which 64-bit Linux OSes are supported for EBS?
    A: Beginning with Release 12, we support the following 64-bit operating systems for both application and database tiers on x86-64 servers:

    - Oracle Enterprise Linux
    - Red Hat Enterprise Linux
    - SUSE Linux Enterprise Server

    For EBS Release 11i (and again in Release 12), when the application tier is installed on a certified platform, additional platforms (including the above) may be used for a 64-bit database tier on x86-64 servers. This is an example of a mixed platform architecture (Release 12), or a Split Configuration (Release 11i).

    Q: I understand that the EBS application tier code is 32-bit, even for the 64-bit Linux OS -- is this the case?
    A: It is true that the majority of executables provided as part of our release media on the application tier are 32-bit (as are the Fusion Middleware libraries and objects they depend on). However, the 'Planning' products have large memory requirements and are therefore 64-bit compiled to take advantage of the larger memory space afforded by the 64-bit OSes.

    Read the article

  • Looking into Entity Framework Code First Migrations

    - by nikolaosk
    In this post I will introduce you to Code First Migrations, an Entity Framework feature introduced in version 4.3 back in February of 2012. I have extensively covered Entity Framework in this blog; please find my other Entity Framework posts here.

    Before the addition of Code First Migrations (versions 4.1, 4.2), Code First database initialisation meant that Code First would create the database if it did not exist (the default behaviour, CreateDatabaseIfNotExists). The other pattern we could use is DropCreateDatabaseIfModelChanges, which means that Entity Framework will drop the database if it realises that the model has changed since the last time it created the database. The final pattern is DropCreateDatabaseAlways, which means that Code First will recreate the database every time one runs the application. That is of course fine for a development database, but totally unacceptable and catastrophic when you have a production database: we cannot lose our data because of the way Code First works. Migrations solve this problem. With migrations we can modify the database without completely dropping it; we can modify the database schema to reflect the changes to the model without losing data. In EF 5.0, migrations are fully included and supported. I will demonstrate migrations with a hands-on example.

    Let me first say a few words about Entity Framework. The .NET Framework provides support for object-relational mapping through EF, so EF is an ORM tool and it is now the main data access technology that Microsoft works on. I use it quite extensively in my projects. Through EF we get many things out of the box: automatic generation of SQL code, mapping of relational data to strongly typed objects, and all the changes made to the objects in memory are persisted in a transactional way back to the data store. You can find in this post an example of how to use Entity Framework to retrieve data from a SQL Server database using the "Database/Schema First" approach; in that approach we make all the changes at the database level and then update the model with those changes. In this post you can see an example of how to use the "Model First" approach when working with ASP.NET and Entity Framework; this model was first introduced in EF version 4.0, where we could start with a blank model, create a database from that model, and recreate the database from the new model whenever the model changed. The Code First approach is more code-centric than the other two. Basically we write POCO classes and then we persist them to a database using something called DbContext; Code First relies on DbContext. We create two or three classes (e.g. Person, Product) with properties, these classes interact with the DbContext class, and we can create a new database based upon our POCO classes and have tables generated from those classes. We do not have an .edmx file in this approach, and by using this approach we can write unit tests much more easily. DbContext is a new context class and is a smaller, lightweight wrapper around the main context class, ObjectContext (Schema First and Model First).

    Let's move on to our hands-on example. I have installed VS 2012 Ultimate edition on my Windows 8 machine.

    1) Create an empty ASP.NET web application. Give your application a suitable name. Choose C# as the development language.

    2) Add a new web form item to your application. Leave the default name.

    3) Create a new folder. Name it CodeFirst.

    4) Add a new item to your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class. Place this class file in the CodeFirst folder. The code follows:

        public class Footballer
        {
            public int FootballerID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public double Weight { get; set; }
            public double Height { get; set; }
        }

    5) We will have to add EF 5.0 to our project. Right-click on the project in the Solution Explorer and select Manage NuGet Packages... for it. In the window that pops up, search for Entity Framework and install it. If you want to find out whether EF 5.0 is indeed installed, have a look at the References.

    6) Then we need to create a context class that inherits from DbContext. Add a new class to the CodeFirst folder and name it FootballerDBContext. Now that we have the entity classes created, we must let the model know about them; I will have to use the DbSet<T> property. The code for this class follows:

        public class FootballerDBContext : DbContext
        {
            public DbSet<Footballer> Footballers { get; set; }
        }

    Do not forget to add (using System.Data.Entity;) at the beginning of the class file.

    7) We must take care of the connection string. It is very easy to create one in the web.config. It does not matter that we do not have a database yet: when we run the DbContext and query against it, it will use the connection string in the web.config and create the database based on the classes. I will use the name "FootballTraining" for the database. In my case the connection string inside the web.config looks like this:

        <connectionStrings>
            <add name="CodeFirstDBContext"
                 connectionString="server=.;integrated security=true; database=FootballTraining"
                 providerName="System.Data.SqlClient"/>
        </connectionStrings>

    8) Now it is time to create LINQ to Entities queries to retrieve data from the database. Add a new class to your application in the CodeFirst folder and name the file DALfootballer.cs. We will create a simple public method to retrieve the footballers. The code for the class follows:

        public class DALfootballer
        {
            FootballerDBContext ctx = new FootballerDBContext();

            public List<Footballer> GetFootballers()
            {
                var query = from player in ctx.Footballers select player;
                return query.ToList();
            }
        }

    9) Place a GridView control on the Default.aspx page and leave the default name. Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DataSourceID property of the GridView control to the ID of the ObjectDataSource control (DataSourceID="ObjectDataSource1"). Let's configure the ObjectDataSource control: click on the smart tag of the ObjectDataSource control and select Configure Data Source. In the wizard that pops up, select the DALfootballer class and then, in the next step, choose the GetFootballers() method. Click Finish to complete the wizard. Build and run your application.

    10) Obviously you will not see any records coming back from your database, because we have not inserted anything. The database is created, though.

    11) Now let's change the POCO class. Let's add a new property to the Footballer.cs class:

        public int Age { get; set; }

    Build and run your application again. You will receive an error.

    12) That was to be expected. EF Code First Migrations are not activated by default; we have to activate them manually and configure them according to our needs. We open the Package Manager Console from the Tools menu within Visual Studio 2012, then activate the EF Code First Migrations features by running the command "Enable-Migrations". This adds a new Migrations folder to our project. A new auto-generated class, Configuration.cs, is created, and another class, [CURRENTDATE]_InitialCreate.cs, is also created and added to our project.

    13) Now we are ready to migrate the changes to the database. We need to run the Add-Migration Age command in the Package Manager Console. Add-Migration will scaffold the next migration based on the changes you have made to your model since the last migration was created. In the Migrations folder, the file 201211201231066_Age.cs is created. Now we can run the Update-Database command in the Package Manager Console. Code First Migrations will compare the migrations in our Migrations folder with the ones that have been applied to the database; it will see that the Age migration needs to be applied, and run it. The EFMigrations.CodeFirst.FootballeDBContext database is now updated to include the Age column in the Footballers table. Build and run your application; everything will work fine now.

    14) We may want the application to automatically upgrade the database (by applying any pending migrations) when it launches. Let's add another property to our POCO class:

        public string TShirtNo { get; set; }

    We want this change to migrate automatically to the database. We go to Configuration.cs and enable automatic migrations:

        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
        }

    In the Page_Load event handling routine we have to register the MigrateDatabaseToLatestVersion database initializer. A database initializer simply contains some logic that is used to make sure the database is set up correctly.

        protected void Page_Load(object sender, EventArgs e)
        {
            Database.SetInitializer(new MigrateDatabaseToLatestVersion<FootballerDBContext, Configuration>());
        }

    Build and run your application. It will work fine. Hope it helps!!!

    Read the article
