Search Results

Search found 11197 results on 448 pages for 'related'.

Page 36/448

  • How do I remove Slony from a restored PostgreSQL database?

    - by Scott Herbert
    I've restored a database which came from a server on which Slony was running. The server on which the database has been restored does not have Slony installed. When the database was restored, a lot of errors were reported: Slony-related objects were not created because the Slony-related logins were missing. I thought this was not a problem, as losing the Slony objects didn't seem to matter and in fact seemed desirable. However, now I've got an annoying, if not critical, problem. Whenever one clicks on a table in the newly restored DB in pgAdmin, a Slony-related error popup ... pops up. The first one reads: "An error has occured: ERROR: function _rmscl.getlocalnodeid(unknown) does not exist". I notice that under the Replication node in pgAdmin there is a Slony replication cluster. Trying to drop this cluster results in more object-missing errors. Does anyone have any ideas how we can remove the last vestiges of Slony from this database?
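
    One possible cleanup, sketched here only as an assumption rather than a confirmed fix, is to drop the orphaned Slony cluster schema directly; the error message suggests it is named "_rmscl". This is a destructive operation, so take a backup first. A minimal Python sketch using psycopg2, with placeholder connection details:

        import psycopg2

        # Placeholder credentials -- adjust for the restored database.
        conn = psycopg2.connect(dbname="restored_db", user="postgres")
        conn.autocommit = True
        with conn.cursor() as cur:
            # Drops the leftover Slony cluster schema and everything it owns.
            # The schema name "_rmscl" is taken from the error message above.
            cur.execute('DROP SCHEMA IF EXISTS "_rmscl" CASCADE;')
        conn.close()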

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy aginst ping, but still pingable

    - by skriatok
    I'm in DMZ mode, so I'm doing the firewalling myself. My tests come back stealthy and all OK, yet Shields Up still gives faulty results saying that I answer pings. Yesterday I couldn't get a connection to game servers to work because ping blocking was enabled on the router. I disabled it there, but the machine remains pingable even though my own firewall should block it. What is the connection between my machine and the router in DMZ mode (only my machine is in the DMZ; there is a bunch of other machines behind the router firewall)? Why does the router's setting decide whether I'm pingable, while the rules in my iptables for this scenario do not work? Please ignore the commented rules; I uncomment them as I want. These two should do the job, right?

        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
        echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

    Here are my iptables:

        #!/bin/sh
        # Begin /bin/firewall-start

        # Insert connection-tracking modules (not needed if built into the kernel).
        #modprobe ip_tables
        #modprobe iptable_filter
        #modprobe ip_conntrack
        #modprobe ip_conntrack_ftp
        #modprobe ipt_state
        #modprobe ipt_LOG

        # allow local-only connections
        iptables -A INPUT -i lo -j ACCEPT

        # free output on any interface to any ip for any service
        # (equal to -P ACCEPT)
        iptables -A OUTPUT -j ACCEPT

        # permit answers on already established connections
        # and permit new connections related to established ones (eg active-ftp)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Gamespy & NWN
        #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT

        # Log everything else: What's Windows' latest exploitable vulnerability?
        iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT"

        # set a sane policy: everything not accepted > /dev/null
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP
        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

        # be verbose on dynamic ip-addresses (not needed in case of static IP)
        echo 2 > /proc/sys/net/ipv4/ip_dynaddr

        # disable ExplicitCongestionNotification - too many routers are still
        # ignorant
        echo 0 > /proc/sys/net/ipv4/tcp_ecn

        # ping death
        echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

        # If you are frequently accessing ftp-servers or enjoy chatting you might
        # notice certain delays because some implementations of these daemons have
        # the feature of querying an identd on your box for your username for
        # logging. Although there's really no harm in this, having an identd
        # running is not recommended because some implementations are known to be
        # vulnerable.
        # To avoid these delays you could reject the requests with a 'tcp-reset':
        #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset
        #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT

        # To log and drop invalid packets, mostly harmless packets that came in
        # after netfilter's timeout, sometimes scans:
        #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix \
        #  "FIREWALL:INVALID"
        #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP

        # End /bin/firewall-start

    Active ruleset:

        bash-4.1# iptables -L -n -v
        Chain INPUT (policy DROP 38 packets, 2228 bytes)
         pkts bytes target  prot opt in  out source    destination
            0     0 ACCEPT  all  --  lo  *   0.0.0.0/0 0.0.0.0/0
          844  542K ACCEPT  all  --  *   *   0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
           38  2228 LOG     all  --  *   *   0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT'
            0     0 ACCEPT  all  --  lo  *   0.0.0.0/0 0.0.0.0/0
            0     0 ACCEPT  all  --  *   *   0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
           38  2228 LOG     all  --  *   *   0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT'

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source    destination

        Chain OUTPUT (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source    destination
         1158  111K ACCEPT  all  --  *   *   0.0.0.0/0 0.0.0.0/0
            0     0 ACCEPT  all  --  *   *   0.0.0.0/0 0.0.0.0/0

    Active ruleset (after editing iptables into the suggested form below):

        bash-4.1# iptables -L -n -v
        Chain INPUT (policy DROP 2567 packets, 172K bytes)
         pkts bytes target  prot opt in  out source    destination
           49  4157 ACCEPT  all  --  lo  *   0.0.0.0/0 0.0.0.0/0
         412K  441M ACCEPT  all  --  *   *   0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
         2567  172K LOG     all  --  *   *   0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT'
            0     0 DROP    icmp --  *   *   0.0.0.0/0 0.0.0.0/0 icmp type 8

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source    destination

        Chain OUTPUT (policy ACCEPT 312K packets, 25M bytes)
         pkts bytes target  prot opt in  out source    destination

    Simultaneous screenshots of the ping and the syslog, from the phone (pinger) and from the laptop (being pinged):
    http://dl.dropbox.com/u/4160051/slckwr/pingfrom%20mobile.jpg
    http://dl.dropbox.com/u/4160051/slckwr/tailsyslog.jpg

    Read the article

  • Iptables rules make communication so slow

    - by mmc18
    When I send a request to an application running on a machine with the following firewall rules applied, the response takes a very long time. When I deactivate the iptables rules, it responds immediately. What makes the communication so slow?

        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p esp -j ACCEPT
        -A INPUT -i ppp+ -j ACCEPT
        -A INPUT -p udp -m udp --dport 500 -j ACCEPT
        -A INPUT -p udp -m udp --dport 4500 -j ACCEPT
        -A INPUT -p udp -m udp --dport 1701 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i lo -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        -A FORWARD -i ppp+ -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT

    Read the article

  • Generating Java classes out of XMLSchema.xsd using JAXB

    - by Christian Schulz
    I'm using JAXB to generate Java classes from an XML schema. The schema imports XMLSchema.xsd, and its content is used as an element in the document. If I remove the import and the reference to "xsd:schema", the binding compiler generates the classes successfully. If I don't, it produces the following errors, which are the same errors I get if I try to generate Java classes from XMLSchema.xsd alone:

        C:\Users\me>"%JAXB%/xjc" -extension -d tmp/uisocketdesc -p uis.jaxb uisocketdesc.xsd -b xml_binding_test.xml -b xml_binding_test_2.xml -b xml_binding_test_3.xml
        parsing a schema...
        compiling a schema...
        [ERROR] A class/interface with the same name "uis.jaxb.ComplexType" is already in use. Use a class customization to resolve this conflict. line 612 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Relevant to above error) another "ComplexType" is generated from here. line 440 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] A class/interface with the same name "uis.jaxb.Attribute" is already in use. Use a class customization to resolve this conflict. line 364 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Relevant to above error) another "Attribute" is generated from here. line 1020 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] A class/interface with the same name "uis.jaxb.SimpleType" is already in use. Use a class customization to resolve this conflict. line 2278 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Relevant to above error) another "SimpleType" is generated from here. line 2222 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] A class/interface with the same name "uis.jaxb.Group" is already in use. Use a class customization to resolve this conflict. line 930 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Relevant to above error) another "Group" is generated from here. line 727 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] A class/interface with the same name "uis.jaxb.AttributeGroup" is already in use. Use a class customization to resolve this conflict. line 1062 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Relevant to above error) another "AttributeGroup" is generated from here. line 1026 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] A class/interface with the same name "uis.jaxb.Element" is already in use. Use a class customization to resolve this conflict. line 721 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Relevant to above error) another "Element" is generated from here. line 647 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] Two declarations cause a collision in the ObjectFactory class. line 1020 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Related to above error) This is the other declaration. line 364 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] Two declarations cause a collision in the ObjectFactory class. line 2278 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Related to above error) This is the other declaration. line 2222 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] Two declarations cause a collision in the ObjectFactory class. line 930 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Related to above error) This is the other declaration. line 727 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] Two declarations cause a collision in the ObjectFactory class. line 440 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Related to above error) This is the other declaration. line 612 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] Two declarations cause a collision in the ObjectFactory class. line 1026 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Related to above error) This is the other declaration. line 1062 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] Two declarations cause a collision in the ObjectFactory class. line 647 of "http://www.w3.org/2001/XMLSchema.xsd"
        [ERROR] (Related to above error) This is the other declaration. line 721 of "http://www.w3.org/2001/XMLSchema.xsd"
        Failed to produce code.

    Read the article

  • Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E

    - by Sebastian Bugiu
    I had Ubuntu 11.10, and in the last few weeks I experienced an obscure problem: after the computer had been running for a few days, I could no longer connect to google.com or anything related to Google. All sites worked in all browsers (Firefox, Chrome, Opera) except Google: the browser remained in the connecting phase for a few minutes and either timed out or finally connected with this huge delay. Even other sites, such as this one, took a long time to load if they had anything to do with Google - AdSense, gstatic, or whatever with a G in it - sitting at "connecting to gstatic.com". Anything Google-related took minutes to work, but everything else worked instantly! Rebooting helped, and another machine (with Windows on it) worked fine, so it's not network-related. But after a few days it started failing again...

    So I upgraded to Precise Pangolin hoping this behavior would go away. It didn't! After a few days I get the same behavior as in 11.10. What am I supposed to do? Reboot every other day? I didn't have this problem with either 10.10 or 11.04. I found the Realtek RTL8168/8111E issue with the r8169 driver, but this is not exactly the same card, so trying r8168 probably won't help.

        Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
            Subsystem: Toshiba America Info Systems Device ff1c
            Flags: bus master, fast devsel, latency 0, IRQ 44
            I/O ports at 4000 [size=256]
            Memory at d0010000 (64-bit, prefetchable) [size=4K]
            Memory at d0000000 (64-bit, prefetchable) [size=64K]
            Capabilities: [40] Power Management version 7
            Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Endpoint, MSI 01
            Capabilities: [ac] MSI-X: Enable- Count=2 Masked-
            Capabilities: [cc] Vital Product Data
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Capabilities: [160] Device Serial Number 09-00-00-00-ff-ff-00-00
            Kernel driver in use: r8169
            Kernel modules: r8169

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some of the points we covered:

    DMVs related to wait stats reset when we reset SQL Server services
    DMVs related to wait stats reset when we manually reset the wait types

    However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes performance tuning experts make modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

        -- Create Table
        CREATE TABLE [MyWaitStatTable](
            [wait_type] [nvarchar](60) NOT NULL,
            [waiting_tasks_count] [bigint] NOT NULL,
            [wait_time_ms] [bigint] NOT NULL,
            [max_wait_time_ms] [bigint] NOT NULL,
            [signal_wait_time_ms] [bigint] NOT NULL,
            [CurrentDateTime] DATETIME NOT NULL,
            [Flag] INT
        )
        GO

        -- Populate Table at Time 1
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],GETDATE(),1
        FROM sys.dm_os_wait_stats
        GO

        ----- Desired Delay (for one hour)
        WAITFOR DELAY '01:00:00'

        -- Populate Table at Time 2
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],GETDATE(),2
        FROM sys.dm_os_wait_stats
        GO

        -- Check the difference between Time 1 and Time 2
        SELECT T1.wait_type,
               T1.wait_time_ms Original_WaitTime,
               T2.wait_time_ms LaterWaitTime,
               (T2.wait_time_ms - T1.wait_time_ms) DiffenceWaitTime
        FROM MyWaitStatTable T1
        INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
        WHERE T2.wait_time_ms > T1.wait_time_ms
              AND T1.Flag = 1 AND T2.Flag = 2
        ORDER BY DiffenceWaitTime DESC
        GO

        -- Clean up
        DROP TABLE MyWaitStatTable
        GO

    If you look at the script, you'll notice that I have used an additional column called Flag. I use it to record when I captured the wait stats, and then use it in my SELECT query to select the wait stats related to that time group. Many times I collect more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them.

    In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • IASA South East Florida Chapter February Meeting Report

    - by Rainer Habermann
    The topic for our February chapter meeting was Legal Issues in IT. Ms. Kennedy, an Intellectual Property attorney with an active litigation, trademark, and copyright practice, presented: How Google, Wal-Mart & Apple Make their Millions – The Secret Ingredient: Intellectual Property.

    This topic generated great interest, and the meeting room at Microsoft Ft. Lauderdale filled up to the last seat. Most architects, engineers, and MBAs are not aware of intellectual property, basic patent and trademark matters, or legal issues related to the web. After clarifying the basic definitions, Ms. Kennedy explained in detail how intellectual property issues can make or break a company. At the end of the presentation, members had the opportunity to ask questions and discuss legal problems, and several members shared their experiences related to intellectual property and other IT-related issues. If you want to protect your ideas and intellectual property, you have to be aware of the implications and need to take the right steps to protect them.

    All chapter members agreed that it was an outstanding and lively presentation: Ms. Kennedy presented high-quality content and made participants aware of legal IT issues. In the name of all chapter members, thank you, Ms. Kennedy, for taking the time for this amazing presentation, and thanks to Quent Herschelman for hosting the meeting.

    Rainer Habermann
    President, IASA South East Florida Chapter

    Read the article

  • New Whitepaper: Best Practices for Gathering EBS Database Statistics

    - by Elke Phelps (Oracle Development)
    Most Oracle Applications DBAs and E-Business Suite users understand the importance of accurate database statistics. Missing, stale, or skewed statistics can adversely affect performance. Oracle E-Business Suite statistics should only be gathered using FND_STATS or the Gather Statistics concurrent request. Gathering statistics with DBMS_STATS or the desupported ANALYZE command may result in suboptimal execution plans for E-Business Suite.

    Our E-Business Suite Performance Team has been busy implementing and testing new features for gathering statistics using FND_STATS in Oracle E-Business Suite databases. The new features and guidelines for when and how to gather statistics are published in the following whitepaper: Best Practices for Gathering Statistics with Oracle E-Business Suite (Note 1586374.1).

    The new white paper details the following options for gathering statistics using FND_STATS and the Gather Statistics concurrent request:

    History Mode - back up existing statistics prior to gathering new statistics
    GATHER_AUTO Option - gather statistics for tables based upon % change
    Histograms - collect statistics for histograms
    AUTO Sampling - use the new FND_STATS feature that supports the AUTO option for using AUTO sample size
    Extended Statistics - use the new FND_STATS feature that supports the creation of column groups and automatic statistics collection on the column groups when table statistics are gathered
    Incremental Statistics - gather incremental statistics for partitioned tables

    The new white paper also includes examples and performance test cases for the following:

    Extended Optimizer Statistics
    Incremental Statistics Gathering
    Concurrent Statistics Gathering

    The white paper includes details about the standalone Oracle E-Business Suite Release 11i and 12 patches that are required to take advantage of this new functionality.

    Your feedback is welcome. We would be very interested in hearing about your experiences with these new options for gathering statistics. Please feel free to post your comments here or drop us a line privately.

    Related Oracle OpenWorld 2013 Session: Getting Optimal Performance from Oracle E-Business Suite (CON8485)
    Related My Oracle Support Note: Collecting Statistics with Oracle EBS 11i and R12 (Note 368252.1)
    Non-EBS-Related Blogs, White Papers and My Oracle Support Notes: Oracle Optimizer Blog; Understanding Optimizer Statistics (white paper); Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Note 798257.1)

    Read the article

  • Election 2012: Twitter Breaks Records with MySQL

    - by Bertrand Matthelié
    Twitter VP of Infrastructure Operations Engineering Mazen Rawashdeh shared news and numbers yesterday on his blog:

    "Last night, the world tuned in to Twitter to share the election results as U.S. voters chose a president and settled many other campaigns. Throughout the day, people sent more than 31 million election-related Tweets (which contained certain key terms and relevant hashtags). And as results rolled in, we tracked the surge in election-related Tweets at 327,452 Tweets per minute (TPM). These numbers reflect the largest election-related Twitter conversation during our 6 years of existence, though they don't capture the total volume of all Tweets yesterday."

    "Last night, Twitter averaged about 9,965 TPS from 8:11pm to 9:11pm PT, with a one-second peak of 15,107 TPS at 8:20pm PT and a one-minute peak of 874,560 TPM. Seeing a sustained peak over the course of an entire event is a change from the way people have previously turned to Twitter during live events. Now, rather than brief spikes, we are seeing sustained peaks for hours."

    Congrats to Jeremy Cole, Davi Arnaut and the rest of the team at Twitter for their excellent work! Jeremy recently held a keynote presentation at MySQL Connect describing how MySQL powers Twitter, and why they chose and continue to rely on MySQL for their operations. You can watch the presentation here. He also went into more detail during another presentation later that day, and you can access the slides here.

    Below are a couple of tweets from Jeremy after what have surely been hectic days... Keep up the good work, guys!

    Read the article

  • Oracle Linux at DOAG 2012 Conference in Nuremberg, Germany (Nov 20th-22nd)

    - by Lenz Grimmer
    This week, the DOAG 2012 Conference, organized by the German Oracle Users Group (DOAG), takes place in Nuremberg, Germany, from Nov. 20th-22nd. There will be several presentations related to Oracle Linux, Oracle VM and related infrastructure (including a dedicated MySQL stream on Tuesday and Wednesday). Here are a few examples picked from the infrastructure stream of the schedule:

    Tuesday, Nov. 20th
    10:00 - Virtualisierung, Cloud und Hosting - Kriterien und Entscheidungshilfen - Harald Sellmann, its-people Frankfurt GmbH, Andreas Wolske, managedhosting.de GmbH
    14:00 - Virtual Desktop Infrastructure Implementierungen und Praxiserfahrungen - Björn Rost, portrix Systems GmbH
    15:00 - Oracle Linux - Best Practices und Nutzen (nicht nur) für die Oracle DB - Manuel Hoßfeld, Lenz Grimmer, Oracle Deutschland
    16:00 - Mit Linux Container Umgebungen effizient duplizieren - David Hueber, dbi services sa

    Wednesday, Nov. 21st
    09:00 - OVM 3 Features und erste Praxiserfahrungen - Dirk Läderach, Robotron Datenbank-Software GmbH
    09:00 - Oracle VDI Best Practice unter Linux - Rolf-Per Thulin, Oracle Deutschland
    10:00 - Oracle VM 3: Was nicht im Handbuch steht... - Martin Bracher, Trivadis AG
    12:00 - Notsystem per Virtual Box - Wolfgang Vosshall, Regenbogen AG
    13:00 - DTrace - Informationsgewinnung leicht gemacht - Thomas Nau, Universität Ulm
    13:00 - OVM x86 / OVM Sparc / Zonen und co. - Bertram Dorn, Oracle Deutschland

    Thursday, Nov. 22nd
    09:00 - Oracle VM 3.1 - Wie geht's wirklich? - Manuel Hoßfeld, Oracle Deutschland, Sebastian Solbach, Oracle Deutschland
    13:00 - Unconference: Oracle Linux und Unbreakable Enterprise Kernel - Lenz Grimmer, Oracle Deutschland
    14:00 - Experten-Panel OVM 3 - Björn Bröhl, Robbie de Meyer, Oracle Corporation
    14:00 - Wie patcht man regelmäßig mehrere tausend Systeme? - Sylke Fleischer, Marcel Pinnow, DB Systel GmbH
    16:00 - Wo kommen denn die kleinen Wolken her? OVAB in der nächsten Generation - Marcus Schröder, Oracle Deutschland

    On a related note: if you speak German, make sure to subscribe to OLIVI_DE - Oracle LInux und VIrtualisierung - a German blog covering topics around Oracle Linux, virtualization (primarily with Oracle VM) and cloud computing using Oracle technologies. It is maintained by Manuel Hoßfeld and Sebastian Solbach (Sales Consultants at Oracle Germany) and will also include guest posts by other authors (including yours truly).

    Read the article

  • Explaining the difference between OData & RDF by way of analogy

    - by jamiet
    A couple of months back I wrote a blog post entitled Microsoft, OData and RDF where I gave a high-level view of the OData protocol and how it compares to RDF. I talked about linked data, triples and such like, which may have been somewhat useful, though jargon-heavy. Earlier today Dr Michael Hausenblas (blog | twitter) offered an analogy which I think is probably more useful, and with Michael's permission I'm re-posting it here:

    "Imagine a Web (a Web of Documents, if you wish) which is not based on HTML and hyperlinks, but on MS Word documents. The documents are all available on the Internet, so you can download them and consume the content. But after you're done with a certain document that talks about a book, how do you learn more about it? For example, reviews about the book, or where you can purchase it? Maybe the original document mentions that there is some more related information on another server. So you'd need to go there and look for the related bit of information yourself. You see? That's what the Web is great at - you just click on a hyperlink and it takes you to the document (or section) you're interested in. All the legwork is taken care of for you through HTML, URIs and HTTP.

    Hm, right, but how is this related to OData? Well, OData feels a bit like the above-mentioned scenario, just concerning data. Of course you - well, actually rather a software program, I guess - can consume it (a single source), but that's it."

    from Oh - it is data on the Web by Michael Hausenblas

    I believe that OData has loads of use cases, but it's important to understand its limitations as well, and I think Michael has done a good job of explaining those limitations.

    @Jamiet

    Read the article

  • What is the correct way to deal with similar but independent features?

    - by Koviko
    Let's say we have a feature request come in and we begin work on it, which we'll call feature-1. It introduces some new logic to the application, which we'll call logic-A and logic-B. A programmer branches from the release branch and begins work on the feature.

    Soon after, we get another feature request, which we'll call feature-2. It will implement logic-A and logic-C in the application. The logic-A being implemented by this feature is the same logic-A as was implemented in feature-1. Let's also say that given logic-B, logic-A might be implemented slightly differently than it would have been given logic-C, and differently again given both logic-B and logic-C (e.g. with only one feature, the code would be less flexible than with both). How should this situation be handled?

    Concrete example (to help with any confusion in my wording):

    feature-1 is a feed from programmers.stackexchange.com.
    feature-2 is a feed from gaming.stackexchange.com.
    logic-A is the implementation of a feed at all (assuming the application currently has no feeds), which links to the content and gives related information.
    logic-B is that the feed's source is programmers.stackexchange.com; it adds to logic-A that the related programming language is displayed.
    logic-C is that the feed's source is gaming.stackexchange.com; it adds to logic-A that the related game's name and box art are displayed.

    Read the article

  • Email Alias [email protected] Replaced with New Oracle Certification Support Tool

    - by Paul Sorensen
    All Oracle Certification customer service issues previously sent to [email protected], [email protected], [email protected], or [email protected] should now be submitted as service requests via the new request tool. Support via these email aliases ends today. Managing candidate communications via this tool will enable better issue-tracking capabilities and ensure that all issues are handled quickly and efficiently. The integrated tool will also help us to more easily research historical and related issues, enabling improved certification communications and business processes.

    For now, questions related to a Java, Oracle Solaris (Cluster), MySQL, NetBeans or OpenOffice.org exam or certification will still be sent to [email protected] and resolved via email. Questions related to the status of an Oracle Certification Success Kit will still be sent to [email protected] and resolved via email.

    We are excited about this new offering and continue to work toward improved customer service for our OCP community. Thank you for your cooperation!

    Quick View of Oracle Certification Customer Support:

    Oracle Certification Support: all issues that previously would have been sent to [email protected]
    [email protected]: all questions on Java, Oracle Solaris (Cluster), MySQL, NetBeans, OpenOffice.org exams and certifications
    [email protected]: all questions on the status of your Oracle Certification Success Kit

    Read the article

  • Oracle ADF at Oracle OpenWorld 2012

    - by Shay Shmeltzer
    This year is going to be very busy for Oracle ADF developers attending Oracle OpenWorld. Check out the list of Oracle ADF related sessions, labs, demos and other Oracle ADF activities; it will help you not miss any ADF-related activity. We have over 50 ADF-related sessions and multiple labs, including new ones on ADF Mobile, Application Life Cycle Management and ADF in Eclipse. We'll have several demo booths where you can meet product managers, and we'll be featured in several keynotes as well.

    While we have several "beginners" sessions, you'll find that we also have a lot of in-depth technical sessions and sessions that cover best practices. Of course, it is not just us product managers presenting about Oracle ADF: there are a lot of Oracle ADF sessions presented by customers, Oracle ACEs, and other developers, so you can learn from the experience of real-life implementations.

    Note that the ADF content starts early on Sunday with a full set of Oracle ADF sessions arranged for you by the Oracle ADF Enterprise Methodology Group - so plan your trip accordingly and be there early Sunday morning. First thing on Monday morning, don't miss the keynote for Oracle ADF developers at 10:45 at the Marriott Marquis - Salon 8: "The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud". We are also arranging a meet-up of developers using Oracle ADF at the OTN Lounge on Wednesday at 4:30pm, and we would love to meet you there. This will also give you an opportunity to meet other Oracle ADF users and members of the community. And after that we can all head over to the big Wednesday party to see Pearl Jam and Kings of Leon.

    One recommendation for those who are already registered: start planning your schedule and booking your place in the sessions now through the schedule builder. This will guarantee that you won't be left out of sessions you want to attend due to room size limitations. Oracle OpenWorld 2012 will be a must-attend event for serious Oracle ADF developers - don't miss it.

    Read the article

  • Big Data – Final Wrap and What Next – Day 21 of 21

    - by Pinal Dave
    In yesterday's blog post we explored various resources for learning Big Data, and in this blog post we will wrap up this 21-day series. I have been exploring various terms and technologies related to Big Data this entire month. It was indeed fun to write about Big Data for 21 days, but the subject is much bigger than anyone can cover in 21 days. My first goal was to write about the basics, and I think we have got that covered pretty well. During these 21 days I have received many questions related to Big Data; I have covered a few of them in this series, and a few more I will be covering in the coming months.

    Now, after understanding the Big Data basics, here is the list of things I am personally going to do next. I thought I would share it with you, as it will give you a good idea of how to continue the journey with Big Data:

    Build a schedule to read the various Apache documentation
    Watch all Pluralsight courses
    Explore the HortonWorks Sandbox
    Start building a presentation about Big Data - this is a great way to learn something new
    Present at user group meetings on Big Data topics
    Write more blog posts about Big Data

    I am going to continue learning about Big Data, and I want you to continue learning it too. Please leave a comment on how you are going to continue learning about Big Data; I will publish all the informative comments on this blog with due credit. I want to end this series with the infographic by UMUC.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people**, though it's not clear how to scale Scrum among hundreds of people, or how the effectiveness of a given team might be compared to another team within the group. Meaning: beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed and compared. Any suggestions related to either of these topics, or additional related topics that might be more important to account for when planning a large-scale Scrum grouping?

    ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research, which oddly appears to show that bigger teams (15+ people), not smaller teams (7 people), are better.

    UPDATE, "Re: Scrum doesn't scale": I've made huge amounts of progress on personally researching the topic, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn:

    "Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project."

    SOURCE: (ran across the book thanks to Ladislav Mrnka's answer)

    Read the article

  • How do I express subtle relationships in my data?

    - by Chuck H
    "A" is related to "B" and "C". How do I show that "B" and "C" might, by this context, be related as well? Example: Here are a few headlines about a recent Broadway play: 1 - David Mamet's Glengarry Glen Ross, Starring Al Pacino, Opens on Broadway 2 - Al Pacino in 'Glengarry Glen Ross': What did the critics think? 3 - Al Pacino earns lackluster reviews for Broadway turn 4 - Theater Review: Glengarry Glen Ross Is Selling Its Stars Hard 5 - Glengarry Glen Ross; Hey, Who Killed the Klieg Lights? Problem: Running a fuzzy-string match over these records will establish some relationships, but not others, even though a human reader could pick them out from context in much larger datasets. How do I find the relationship that suggests #3 is related to #4? Both of them can be easily connected to #1, but not to each other. Is there a (Googlable) name for this kind of data or structure? What kind of algorithm am I looking for? Goal: Given 1,000 headlines, a system that automatically suggests that these 5 items are all probably about the same thing. To be honest, it's been so long since I've programmed I'm at a loss how to properly articulate this problem. (I don't know what I don't know, if that makes sense). This is a personal project and I'm writing it in Python. Thanks in advance for any help, advice, and pointers!

    Read the article

  • What FOSS solutions are available to manage software requirements?

    - by boos
    In the company where I work, we are starting to plan for compliance with the software development life cycle. We already have a wiki, a VCS, a bug tracking system, and a continuous integration system. The next step we want to take is to start managing software requirements in a structured way. We don't want to use a wiki or shared documentation, because we have many inputs (developer, manager, commercial, security analyst and others) and we don't want to handle a proliferation of .doc files around the network share.

    We are trying to search for, and hope we can find and use, a FOSS application to manage all these things. We have about 30 people and don't have a budget for commercial software, so we need a free solution for requirements management. What we want is software that can manage:

    Required features:

    Software requirements divided in a structured, configurable way
    Versioning of the requirements (history, diff, etc., like source code)
    Interdependency of requirements (child of, parent of, related to)
    Rule-based access control for data handling
    Multi-user, multi-project
    File upload (for graphs, documents related to requirements and so on)
    Report and extraction features

    Optional features:

    Web based
    Test cases
    Time-based management (timeline, expected data, result data)
    Person allocation and so on
    Business-related stuff
    Hardware allocation handling

    I have already played with Testlink, I'm now playing with RTH, and the next one I'll try is Redmine.

    Read the article

  • Manager Self Service at your Fingertips

    - by Elaine Clement
    Last week we released new and improved Manager Self Service capabilities in PeopleSoft HCM 9.1. We delivered a new Manager Dashboard, streamlined many Manager Self Service transactions, provided new Pivot Grid capabilities, and implemented one-click Related Actions accessible from multiple places - all with the goal of improving every manager's self service experience. These new capabilities have the potential to significantly impact an organization's bottom line, and here is why.

    Increased Efficiency: The Manager Dashboard provides a 'one-stop shop' for your managers, with all of the key data they need consolidated into a single view. Alerts notifying managers of important tasks are immediately viewable and actionable. Administrators can configure the dashboard to include the most important pagelets needed for their organization, and managers can personalize it to fit their personal way of conducting their tasks. The Related Actions feature further improves the ease with which managers get their work done by providing one-click access to Manager Self Service transactions.

    Increased Job Satisfaction: The streamlined Manager transactions, Related Actions, and the new Manager Dashboard provide an enhanced user experience. Managers are able to quickly get in, get the information they need, complete their transactions, and get out. Managers can spend their time focusing on getting the business results they need instead of on their day-to-day HR tasks.

    Enhanced Decision Support: Administrators can ensure the information and analytics they want their managers to use are available from the Manager Dashboard, establishing best business practices. Additional pivot grids relevant to your own organization can be added to the Manager Dashboard. With this easy access to the relevant information in an easily understood format, managers can make the right business decisions needed to improve their team and their team's productivity.

    For more details on the Manager Dashboard and some of the other newly posted features, such as a new Talent Summary, check out this video and others: Oracle PeopleSoft Webcasts

    Read the article

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements, which will be in the release candidate. We've blogged about a couple of these items before. I plan to add detail; let me know which ones you'd like to see more on.

    - Memory Manager Redesign: predictable sizing and governing of SQL memory consumption:
      sp_configure 'max server memory' now limits all memory committed by SQL Server
      Resource Governor governs all SQL memory consumption (other than special cases like the buffer pool)
      Improved scalability of complex queries and operations that make >8K allocations
      Improved CPU and NUMA locality for memory accesses
      Single memory manager that handles page allocations of all sizes
      Consistent out-of-memory handling & management across different internal components

    - Optimized Memory Broker for Column Store indexes (Project Apollo)

    - Resource Governor:
      Support larger-scale multi-tenancy by increasing the max number of resource pools: 20 -> 64 [for 64-bit]
      Enable predictable chargeback and isolation by adding a hard cap on CPU usage
      Enable vertical isolation of machine resources
      Resource pools can be affinitized to individual or groups of schedulers or to NUMA nodes
      New DMV for resource pool affinity

    - CLR 4 support, adds .NET Framework 4 advantages

    - sp_server_diagnostics:
      Captures diagnostic data and health information about SQL Server to detect potential failures
      Analyzes internal system state
      Reliable when nothing else is working

    - New SQLOS DMVs (in 2008 R2 SP1):
      SQL Server related configuration - new DMV: sys.dm_server_services
      OS related resource configuration - new DMVs: sys.dm_os_volume_stats, sys.dm_os_windows_info, sys.dm_server_registry
      XEvents for SQL and OS related Perfmon counters
      Extended sys.dm_os_sys_info
      See previous blog posts here and here.

    - Scale / mission critical:
      Increased scalability: support Windows 8 max memory and logical processors
      Dynamic Memory support in Standard Edition - Hot-Add Memory enabled when virtualized
      Various Tier-1 performance improvements, including reduced instructions for superlatches.

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Boot failure randomly. Can someone help?

    - by desgua
    I often get stuck at boot when on battery. I can suspend and hibernate without any trouble (even on battery). If I plug in the power cable, everything goes right most of the time (edited June 13). Adding "acpi=off" doesn't solve the issue, and disabling power saving for the LAN in the BIOS doesn't solve it either. The memory test seems to be OK, and I can even compile a kernel without a problem. Has anyone got a tip for this? (I have the same question marks in my mind as the screenshot shows, if not more :-/)

    obs.: this is an upgraded installation from Natty alpha.

    Edit I (May 22): I've installed a brand new final 11.04 and got the bug again.

    Edit II (June 13): I thought it was solved, but it is not. After spending eight days off, the same problem happened even with the power cable connected. I had to reboot about 6 times until success.

    Edit III (June 14): I can boot (even on battery) if I disconnect the cable and take the battery out for a few seconds. This leads me to conclude that maybe something is kept in the memory of some piece of hardware. Maybe related or maybe not, but I also have touchpad issues (jumps) with this machine.

    Edit IV (June 25): I opened gconf-editor, went to apps > gnome-power-manager, and disabled everything possibly related to suspend or hibernate. No help. I also grabbed a Fedora live USB pendrive and got the same problem, so I think it is a hardware-related issue.

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a git repository in which we keep all of our cronjobs. There are about 20 crons, and they are not really related except for the fact that they are all small Python scripts and essential for some activity. We are using a fabfile.py to deploy and a requirements.txt to manage the requirements for all of the scripts.

    Our question is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? By keeping them in one repository, it is easier to deploy them onto one server, and we can use just one cron file for all the scripts. However, this feels wrong, as the 20 cronjobs are not logically related. Additionally, with one requirements.txt file for all the scripts, it's hard to figure out the dependencies of a particular script, and they all have to use the same versions of packages.

    We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large, and that solution seems to be overkill.

    A related question: do we use one big crontab file for all cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain, as there would then be 20 different cron files to keep track of.

    In short, our main question is: do we keep them all closely bundled in one repository, or do we separate them out into their own repositories, each with its own requirements.txt and fabfile.py? We feel like we're probably overlooking a really simple solution. Is there an easier way to deal with this issue?

    Read the article

  • BI Publisher : Formatting Issues

    - by Manoj Madhusoodanan
    While creating BI Publisher reports, formatting issues are quite common. Here I am discussing some common issues related to BIP report development.

    1) The first issue is related to column formatting. When you want to display data that has leading zeros, or trailing zeros after '.', in EXCEL output, you will not get the desired output - although in PDF it will come out as expected. This is not an issue with your data; it is due to the unique nature of the EXCEL cell format. When you put text data in a cell without making any change to the cell format, EXCEL treats it as a number and truncates all leading zeros and all trailing zeros after '.'. So what you have to do is convert the data into a format which EXCEL will treat as text.

    Eg: If you want to display 0020100, convert this data into ="0020100". The same way, for 23789.02300, use ="23789.02300".

    Note: This is applicable to EXCEL output only. If you have multiple output types, apply it only for EXCEL.

    2) The second issue is related to report size in the PDF output type. If the number of columns is large and you want to show most of the columns in one row, and it is a PDF output, you can choose the paper size Legal (8.5 x 14''). You will get more space in the template to accommodate more columns.

    3) If your XML data contains special characters like &, <, > etc., pass the data through the DBMS_XMLGEN.CONVERT function. It will replace special characters with the corresponding XML notations.

    Eg: (a>b) & (c!=d) becomes (a&gt;b) &amp; (c!=d)
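
    A tiny sketch of the EXCEL-text workaround from point 1), purely illustrative - in a real report the wrapping would happen in the query or data template that feeds the EXCEL output:

        def excel_text(value):
            # Wrap the value so EXCEL treats it as text and keeps the zeros.
            return '="{0}"'.format(value)

        print(excel_text("0020100"))      # -> ="0020100"
        print(excel_text("23789.02300"))  # -> ="23789.02300"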

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most of the time this is not a problem (assuming the projects/databases are unrelated - for instance, when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations.

    For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could specify the unique property as a parameter that can be set in the project's settings:

        class This(models.Model):
            other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS)

    Or if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs:

        for related in settings.MODELS_RELATED_TO_OTHER:
            model_name = '%s_Other' % related
            globals()[model_name] = type(model_name, (models.Model,), {
                'me': models.ForeignKey(find_model_class(related)),
                'other': models.ForeignKey(Other),
                # Some other properties all intersection tables must have
            })

    Etc. Let me stress that I'm not proposing to change the models at runtime nor anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration).

    Is this a good design? Are there better ways to accomplish the same thing, or maybe drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific need for customization can be detected while the app model is being designed).
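
    For completeness, a minimal sketch of the two pieces the snippets above assume - the settings values and the find_model_class() helper. The names come from the question itself; the implementation is an assumption (on a modern Django, the app registry would do the lookup):

        # settings.py (illustrative values)
        OTHER_TO_THIS = True                        # True -> one-to-one, False -> one-to-many
        MODELS_RELATED_TO_OTHER = ['This', 'That']  # models that get an intersection table

        # helper assumed by the model-generating loop above
        from django.apps import apps

        def find_model_class(name, app_label='myapp'):  # app label is a placeholder
            return apps.get_model(app_label, name)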

    Read the article

  • SQL DB design to support user feeds (in application like facebook)

    - by Yoav
    I have a social network server with a MySQL DB. I want to show the users feeds, as done on Facebook. Example: "UserX is now friends with UserY", "UserX liked PostX", etc.

    Currently I have a table:

    C1: UserId
    C2: LogType (now friend, did like, etc.)
    C3: ObjectId (can be a userId or a postId, depending on the LogType)

    Currently, to get all the related logs to show to a user, I do the following:
    1. Get all the user's friends' userIds.
    2. Query all rows whose C1 is in those userIds.
    3. Scan the results and check each row - e.g. if LogType equals DidLike, check whether the post's ownerId is the userId; if yes, add it to the logs. And so on.

    Obviously this is not efficient at all, so I am looking for a better way. What I had in mind: create a new table (in addition to the log table) with

    C1: UserId (the user whose feed this row belongs to)
    C2: LogId (from the log table)
    C3: UserId of the one who did the action

    When querying logs, look in this table and fetch the related logs (by LogId) from the log table.

    Updating the table - whenever a user does an action that should be in the log:
    1. Add the log entry to LogTable.
    2. Scan the DB to see which users are interested in the log (who the friends are, who owns the post) and add related entries to the new table (must be done in the background).
    3. If a user UNFRIENDs another user, look in the new table for all rows where C3 == the unfriended user's id and delete them.

    Any opinions? Other suggestions?
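
    A runnable toy of that fan-out-on-write idea, using sqlite3 so it is self-contained; the table and helper names (LogTable, FeedTable, record_action) are invented for illustration, and in production step 2 would be a background job against MySQL:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE LogTable  (LogId INTEGER PRIMARY KEY, UserId INT, LogType TEXT, ObjectId INT);
            CREATE TABLE FeedTable (UserId INT, LogId INT, ActorUserId INT);
        """)

        def record_action(actor_id, log_type, object_id, interested_user_ids):
            # 1. write the canonical log row
            cur = db.execute(
                "INSERT INTO LogTable (UserId, LogType, ObjectId) VALUES (?, ?, ?)",
                (actor_id, log_type, object_id))
            log_id = cur.lastrowid
            # 2. fan out one feed row per interested user (background job in practice)
            db.executemany(
                "INSERT INTO FeedTable (UserId, LogId, ActorUserId) VALUES (?, ?, ?)",
                [(uid, log_id, actor_id) for uid in interested_user_ids])

        def unfriend(user_id, unfriended_id):
            # 3. purge feed rows whose actor is the unfriended user
            db.execute("DELETE FROM FeedTable WHERE UserId = ? AND ActorUserId = ?",
                       (user_id, unfriended_id))

        record_action(7, "DidLike", 42, interested_user_ids=[1, 2, 3])
        unfriend(1, 7)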

    Read the article
