Search Results

Search found 38522 results on 1541 pages for 'single source'.

Page 81 of 1541

  • getUserPrincipal() in JCIFS / LAN Manager authentication level setting in Windows 2k8

    - by Chris
    I need to find out exactly which format JCIFS stores the user principal in for the "getUserPrincipal()" property. To check this I created a test environment: a Windows Server 2008 domain controller for a domain named "MYDOMAIN", a number of test users in Active Directory, and a Tomcat application server running my web application (which simply reads the user principal and displays its values). The user should be logged in to the web application with SSO, so I need the format that JCIFS uses to store the user (for example user@MYDOMAIN or MYDOMAIN\user...). I tested authentication with other SSO frameworks using Kerberos and it works as expected. I'm now trying to use SSO through the NtlmHttpFilter of JCIFS. When I try to log in I get the following error message:

        jcifs.smb.SmbException: The parameter is incorrect.
            jcifs.smb.SmbTransport.checkStatus(SmbTransport.java:541)
            jcifs.smb.SmbTransport.send(SmbTransport.java:641)
            jcifs.smb.SmbSession.sessionSetup(SmbSession.java:322)
            jcifs.smb.SmbSession.send(SmbSession.java:224)
            jcifs.smb.SmbTree.treeConnect(SmbTree.java:176)
            jcifs.smb.SmbSession.logon(SmbSession.java:153)
            jcifs.smb.SmbSession.logon(SmbSession.java:146)
            jcifs.http.NtlmHttpFilter.negotiate(NtlmHttpFilter.java:189)
            jcifs.http.NtlmHttpFilter.doFilter(NtlmHttpFilter.java:121)

    According to the documentation I'm using to configure this, this is a known issue with Group Policy: I have to change the policy "Network access: LAN Manager authentication level" so that the server responds to NTLMv1 requests. I have done this, but it's still not working. I also have to configure the same policy on the client computer so that it sends NTLMv1, but it keeps sending NTLMv2 tokens. The problem now is that I'm somehow unable to change this setting (I was able to earlier), because the drop-down box for choosing the authentication method is greyed out. Edit: just to make this clear, this dialog is on the client side, in the local security policies. The selected method is "Only send NTLMv2 responses", which is the wrong setting, and I'm pretty sure it is causing the error above. My question is: why can't I change this setting? Why is it greyed out?

    Read the article

  • Can you recommend a robust OpenID 2.0 provider?

    - by larsks
    Help me find a robust OpenID 2.0 provider! We're looking at various SSO solutions for our organization, and I would like to suggest OpenID as a viable option, since (a) there is good consumer support in a number of web applications, and (b) it's simpler to implement than Shibboleth, which is the alternative technology. However, this requires that we find a robust OpenID provider, ideally one meeting the 2.0 specification. The only solutions I've come across so far are: Atlassian Crowd This looks great, although the $4000 price tag may make it a tough sell. Community-ID This looks like an interesting idea, but I'm not sure the project quality is at a suitable level (yet). In particular, it's not clear if LDAP support actually works (which will be a requirement in our environment). Have you implemented OpenID in your environment? What are you using? Have you selected an alternative SSO technology?

    Read the article

  • Open source solution for a gateway for a housing cooperative network of 150 people

    - by SirDinosaur
    I just inherited a barely functioning network for a student housing cooperative of about 150 people. In its current state, as I understand it from the previous person in charge of the network, we have working wireless access points and working Ethernet runs going to working gigabit switches going to a barely functioning gateway (right now a simple home router) to one of three possible outbound connections. It is possible to connect to the network through wireless or Ethernet, but especially during peak hours, packets / connections are likely dropped or otherwise get no response. My intuition tells me to replace the gateway with something that can handle multiple outbound connections (WAN) and one inbound connection (LAN), while the rest of the network seems suitable for now. I'm somewhat knowledgeable in Linux (been using Debian after first Arch Linux) and I want to use as much open source as possible, but I'm confused whether or not a simple server that I could easily understand will work for this situation. Do I need specialized hardware to handle the switching more effectively? If so, what are my options? (I found this, thoughts?) Or if a Debian server would work, is there anything else I should know about the specs required for this type of server? Also, links to any useful information on using open source to maintain this type of network would be most appreciated. <3 P.S. crossposted http://redd.it/yybp2.
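
    As a rough illustration of the "plain Debian box as gateway" idea, here is a minimal sketch of the forwarding/NAT side, assuming eth0 faces one upstream connection and eth1 faces the co-op LAN (interface names are placeholders; balancing across the three outbound connections would need additional policy routing with ip rule and multiple routing tables on top of this):

        #!/bin/bash
        # Minimal sketch: Debian box as a NAT gateway.
        # eth0 = one upstream (WAN) link, eth1 = the co-op LAN -- placeholders.

        # allow the box to route packets between its interfaces
        sysctl -w net.ipv4.ip_forward=1

        # rewrite LAN source addresses on the way out of the WAN interface
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # let LAN hosts out, and let replies back in
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT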

    Read the article

  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services we are paying to access their websites, to ask if we could authenticate our own users, and some of them said yes and sent me specs on how to do so. (One of the sites called such a page a "portal"; I've never heard the term used in quite that way.) It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable. The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this and is readily extended, surely. In my searching, I've come across: SimpleSAMLphp, JOSSO, RubyCAS-Server, Shibboleth, Pubcookie, and OpenID. [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Service is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.] Can anyone recommend any of these, or suggest ones to avoid? Internally, we are authenticating using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server (which, I believe, == SAML) ]. As far as extending/tweaking/advanced configuration of a system goes, I can program in Python and C++, can do some basic PHP, and may be able to remember some Java. Looks like I need to pick up Ruby at some point. Addendum: I would also like users to be able to change their passwords over the web (and for certain users to change the passwords of other users).

    Read the article

  • Uninstall php5 installed from source

    - by diegomichel
    I tried to install php5 from source, and it worked... Then for some reason I needed to install the official packages, so I tried a make uninstall and to my surprise there is no such make uninstall target... so I tried to delete all the installed files by hand. Then I installed the official Debian packages and it worked fine... until I needed to install the sqlite module, which gives me the following error:

        php --version
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies

    So I remembered that manual install, and I think there is some old lib still installed that is causing this problem. The bad thing is that there is no make uninstall in the php5 source tree either:

        php-5.2.13 > make uninstall
        make: *** No rule to make target `uninstall'.  Stop.

    I have tried reinstalling and purging all PHP-related packages via aptitude without success. OS: Debian Squeeze.

        uname -a
        Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux

    Any idea how to fix this?
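
    One way to hunt down leftovers from the old source build is to list the places a default ./configure installs into and ask dpkg which files Debian actually owns; anything unowned is a candidate for removal. This is only a sketch, under the assumption that no custom --prefix was used:

        #!/bin/bash
        # Sketch: find PHP files that no Debian package claims (likely leftovers
        # from the source install). Assumes the default /usr/local prefix.
        for f in /usr/local/bin/php /usr/local/bin/php-config /usr/local/bin/phpize \
                 /usr/local/lib/php /usr/local/include/php /usr/local/etc/php.ini; do
            [ -e "$f" ] && echo "leftover from source install: $f"
        done

        # Anything under the packaged PHP directories that dpkg does not own
        # is also suspect (e.g. an old extension .so copied there by hand).
        find /usr/lib/php5 /etc/php5 -type f 2>/dev/null | while read -r f; do
            dpkg -S "$f" >/dev/null 2>&1 || echo "not owned by any package: $f"
        done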

    Read the article

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production. The motivation: we want to present information from these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons, one of them being pagination, which gets tricky if you use multiple sources). The problem: how do we collect data from multiple databases, storing it at a central location and clearly marking the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone. If there is a solution to this that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
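
    For a sense of what the "one additional column for origin" idea involves, even a naive poll-and-tag approach can do it without touching the production schemas. The sketch below is purely illustrative (host, database, table and column names are invented) and ignores incremental loading, which a real setup would need:

        #!/bin/bash
        # Sketch: pull a table from each production server into a central
        # reporting database, tagging every row with the server it came from.
        CENTRAL_HOST=reporting.example.com
        CENTRAL_DB=central

        for ORIGIN in site-a.example.com site-b.example.com; do
            TMP=$(mktemp)
            # export the production rows as tab-separated values ...
            mysql --batch --skip-column-names -h "$ORIGIN" proddb \
                  -e "SELECT id, created_at, amount FROM orders" |
                  sed "s/^/$ORIGIN\t/" > "$TMP"
            # ... and bulk-load them into orders_all(origin, id, created_at, amount)
            mysql --local-infile=1 -h "$CENTRAL_HOST" "$CENTRAL_DB" \
                  -e "LOAD DATA LOCAL INFILE '$TMP' REPLACE INTO TABLE orders_all"
            rm -f "$TMP"
        done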

    Read the article

  • Adding third disk as a single disk in a server with an existing RAID1

    - by slowhandsolo
    I've got a ProLiant DL360 G5 server (Fedora 13) with two SAS disks in a hardware RAID 1, working fine. Now I have hot-plugged another SAS disk. I'd like to configure this new hard disk outside my RAID, as a single non-RAID disk (e.g. /dev/sdb). Even after rebooting the server, I can't see the new disk with "fdisk -l". It displays only my hardware RAID, but not the new disk.

        [root@myserver]# fdisk -l

        Disk /dev/cciss/c0d0: 300.0 GB, 299966445568 bytes
        Device Boot          Start      End      Blocks     Id  System
        /dev/cciss/c0d0p1 *      1       126      512000    83  Linux
        /dev/cciss/c0d0p2      126     71798   292422656    8e  Linux LVM

        Disk /dev/dm-0: 234.9 GB, 234881024000 bytes
        Disk /dev/dm-1: 10.5 GB, 10536091648 bytes
        Disk /dev/dm-2: 21.0 GB, 20971520000 bytes
        Disk /dev/dm-3: 31.5 GB, 31474057216 bytes
        Disk /dev/dm-4: 1577 MB, 1577058304 bytes

    However, I can see the new disk using the HP Array Configuration Utility CLI for Linux, "hpacucli":

        [root@myserver]# hpacucli
        => controller slot=0 physicaldrive all show status

           physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 300 GB): OK
           physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 300 GB): OK
           physicaldrive 1I:1:3 (port 1I:box 1:bay 3, 300 GB): OK

        => controller slot=0 pd all show detail

        Smart Array P400i in Slot 0 (Embedded)

           array A
              physicaldrive 1I:1:1
                 Port: 1I
                 Box: 1
                 Bay: 1
              physicaldrive 1I:1:2
                 Port: 1I
                 Box: 1
                 Bay: 2

           unassigned
              physicaldrive 1I:1:3
                 Port: 1I
                 Box: 1
                 Bay: 3
                 Status: OK
                 Drive Type: Unassigned Drive

    As you can see, I've got two SAS disks in a RAID 1 and the new disk as "unassigned". Is there any way to work with the new disk as another non-RAID single disk? If relevant, I want to create a new partition on the new disk, format it with mkfs and mount it, but as I can't see it with fdisk, I don't know how to do it. Thanks!
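
    Smart Array controllers normally refuse to hand raw disks to the OS, so the usual way this is approached (a sketch only, using the bay shown above; the resulting device name may differ) is to wrap the unassigned drive in its own single-disk RAID 0 logical volume and then treat that logical volume as the "non-RAID" disk:

        # Sketch: turn the unassigned bay-3 drive into its own logical drive,
        # then partition, format and mount whatever device it shows up as
        # (typically the next cciss device, assumed here to be c0d1).
        hpacucli controller slot=0 create type=ld drives=1I:1:3 raid=0

        parted -s /dev/cciss/c0d1 mklabel msdos mkpart primary 1MiB 100%
        mkfs.ext4 /dev/cciss/c0d1p1
        mkdir -p /mnt/newdisk
        mount /dev/cciss/c0d1p1 /mnt/newdisk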

    Read the article

  • postfix smtp_fallback_relay for deferred messages to a single domain

    - by EdwardTeach
    I use Postfix to send messages to a mail server outside my organization which frequently rejects/defers my mail. My Postfix server sees that these messages are deferred and tries again, eventually getting through. Final delivery can take up to an hour, which makes my users unhappy. In comparison, mail from my Postfix server to other hosts works normally. I have now found out about a second, unofficial MX for this domain that does not reject/defer mail. This second MX does not appear when doing a DNS MX query for the domain. Therefore, for the problem domain I would like to use this second MX as a fallback. That is: whenever mail is deferred by the primary MX, try again on the unofficial second MX. I see that there is already a Postfix configuration parameter, "smtp_fallback_relay". However, the documentation seems to indicate that I cannot restrict usage of the fallback to a single domain. The documentation also doesn't mention deferred message handling. So is there a way to configure a single-domain, deferred-retry fallback host in Postfix? For reference, I am including my postconf output (the host names and IP addresses are fake):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/etc/postfix/legacy_mailman, ldap:/etc/postfix/ldap-aliases.cf
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        default_destination_concurrency_limit = 2
        inet_interfaces = all
        inet_protocols = all
        local_destination_concurrency_limit = 2
        local_recipient_maps = $alias_maps
        mailbox_size_limit = 0
        mydestination = myhost.my.network, localhost.my.network, localhost, my.network
        myhostname = myhost.my.network
        mynetworks = 127.0.0.0/8, [::ffff:127.0.0.0]/104, [::1]/128, 10.10.10.0/24
        myorigin = my.network
        readme_directory = no
        recipient_delimiter = +
        relay_domains = $mydestination
        relayhost =
        smtp_fallback_relay = the.problem.host
        smtp_header_checks =
        smtpd_banner = $myhostname ESMTP $mail_name
        virtual_alias_maps = hash:/etc/postfix/virtual
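
    One pattern often suggested for this kind of per-domain special-casing (a sketch, not taken from the question; the domain and host names below are placeholders) is to route only the problem domain through its own smtp transport in master.cf and give that transport its own smtp_fallback_relay, leaving all other destinations on the normal path:

        # Sketch: per-domain fallback relay via a dedicated transport.

        # 1. route the problem domain through a transport called "problemrelay"
        postconf -e 'transport_maps = hash:/etc/postfix/transport'
        echo 'problem.example    problemrelay:' >> /etc/postfix/transport
        postmap /etc/postfix/transport

        # 2. define that transport in master.cf with its own fallback relay
        #    (the second line must start with whitespace, marking it as an option)
        printf '%s\n' \
          'problemrelay unix -      -       n       -       -       smtp' \
          '  -o smtp_fallback_relay=[unofficial-mx.problem.example]' \
          >> /etc/postfix/master.cf

        postfix reload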

    Read the article

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: three Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, along with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from three of the machines to the fourth, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
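
    For anyone reproducing this, a quick way to make the per-flow vs per-port distinction visible (a generic test sketch, not part of the original report; the host name is a placeholder) is to compare one stream against several parallel streams through the switch:

        # on the receiving 10 GigE host
        iperf -s

        # on the sending 10 GigE host
        iperf -c receiver.example.com -t 30          # single TCP stream
        iperf -c receiver.example.com -t 30 -P 4     # four parallel streams
        # if one stream caps near 1 Gb/s but four streams scale well past it,
        # the limit is per-flow (as described above), not per-port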

    Read the article

  • Windows Server 2003 - passwordless access to \\myhost\ but not \\myhost.mydomain.net\

    - by Charles Duffy
    I have a Windows Server 2003 system on which passwordless access to local UNC paths is possible using the server's unqualified hostname or its IP address, but not via its FQDN -- even when the hosts file is used to map that FQDN directly to 127.0.0.1. That is:

        \\127.0.0.1\            - passwordless
        \\myhost\               - passwordless
        \\myhost.mydomain.com\  - brings up an authentication dialog

    Unfortunately, I have a local application trying to resolve UNC paths including the host's FQDN. I've tried resolving myhost.mydomain.com to 127.0.0.1 in both hosts and lmhosts, and calling ping myhost.mydomain.com at the command prompt gives the appearance that this resolution has taken effect; even so, attempting to open \\myhost.mydomain.com\ from Windows Explorer brings up a password prompt, while \\127.0.0.1\ does not. The system is using an OpenDirectory server (Apple's Kerberos+LDAP directory service) for authentication.

    Read the article

  • Adding a subnet to a vSphere setup with a single vCenter and ESXi host

    - by Ilya Rakhlin
    Let me start off by saying that I do not specialize in networking. I am in the process of adding additional VMs to a testing environment and wanted some recommendations. In this case I am running a single ESXi 5.1 host and a single vCenter management server. The problem is, I need another range of IP addresses added to the existing setup, hopefully without reconfiguring everything. Currently the ESXi host is configured with IP 192.168.100.200, gateway 192.168.100.1 and subnet mask 255.255.255.0. All of the VMs are running some version of Linux with hard-coded IP addresses in that range, using that subnet. The VMs I am about to deploy I want to be on the 192.168.101.x network. Is it possible to add an additional subnet to this existing system that will also communicate with the current subnet? The ESXi host has 6 physical NICs but only one connected, as it is only a testing system; not sure if that matters. Are there any other ways to accomplish this, hopefully without restarting or at least without reconfiguring the IP addresses for each VM? Reason: due to the way the VMs are configured to run the applications we need, I am using a large amount of the current IPs in the potential range (mostly VIPs). I will be setting up a new version of this "environment" while keeping the old one, thus potentially running out of IP addresses.

    Read the article

  • LDAP Account Locked Out Sporadically after Password change - Finding the source of invalid attempts

    - by CityView
    On a small network of machines (<1000) we have a user whose account is being locked out after an indeterminate interval following a password change. We are having severe difficulties finding the source of the invalid logon attempts, and I would appreciate it greatly if some of you could walk through your thought process and the checks you would perform in order to fix the problem. All I know for sure is that the account is locked out several (5+) times a day; I can't even be sure it's due to failed login attempts, as there is no record of failure until the account is locked. So far I have tried:

        - Logging the account out of everything we can think of and back in with the new password
        - Scanning the user's box for any non-standard software which might perform an LDAP lookup
        - Checking all installed services on our production boxes to make sure none are attempting to run under the account
        - Changing the user back to their old password (the problem persists, so perhaps the password change is a red herring)
        - Wireshark on a box where lots of LDAP authentication is performed - rejects only occur after the account is already locked out
        - Clearing the credential cache in Control Panel - User Accounts - Advanced
        - Looking at the local ...

    I'm at a loss for what to try. I am happy to try any suggestions you have in order to diagnose the issue. I think my question boils down to a simple request: I need a technique for deriving the source (application/host) of the invalid login attempts which are causing the account to be locked. I'm not sure if that's even possible, but I suspect there must be more I can try. Many thanks, CityView

    Read the article

  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss as to how to create a snapshot of a single virtual hard disk of a running Hyper-V VM. Generally, creating a differencing disk while a server is shut down is no problem (i.e., call the new-vhd cmdlet and pass a ParentPath, then update the VHD binding of the respective VM device), but while the host is running, all I can find is checkpointing the VM as a whole (which creates snapshots of all attached disks) and leaves the VM state in a form which isn't easily processable by external tools (i.e., it requires reading additional metadata from the VM). Generally, what I'd like to happen for a single-disk snapshot (in my understanding) is:

        1. Pause the VM
        2. Rename the current disk to some other name which marks it as the base snapshot
        3. Create a new VHD which has the renamed VHD as parent path and is marked as "current"
        4. Swap the VHD of the snapshotted hard disk for the newly created differencing VHD
        5. Resume the VM

    Is there any means to do this programmatically? Update: I've seen that this is actually possible with SCSI disks, i.e. pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM. And the VM resumes properly. But: is something similar also possible with Gen 1 machines for the boot disk, which is always IDE?

    Read the article

  • How to find the source of a cryptic event viewer log

    - by mlsteeves
    I'm looking at the Event Viewer logs and I see a bunch of Error entries in the Application log (Windows Server 2008 R1). There is an error written to the log about every 4 seconds. I need to find out which application is causing these events; is there any way to find this out? Here is what each one looks like:

        Error    12/2/2010 12:00:09 PM    Application    0    None

    The details for each error:

        Log Name:      Application
        Source:        Application
        Date:          12/2/2010 12:00:09 PM
        Event ID:      0
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      computer.domain
        Description:
        The description for Event ID 0 from source Application cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: the message resource is present but the message is not found in the string/message table
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Application" />
            <EventID Qualifiers="0">0</EventID>
            <Level>2</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-12-02T20:00:09.000Z" />
            <EventRecordID>237167</EventRecordID>
            <Channel>Application</Channel>
            <Computer>computer.domain</Computer>
            <Security />
          </System>
          <EventData>
            <Binary>534F434B...</Binary>
          </EventData>
        </Event>

    Read the article

  • PackageForTheWeb "should not be executed directly"

    - by Sergei Ousynin
    When trying to install a package (specifically, chipset drivers for a Nexcom PEAK-series computer, but I guess it doesn't really matter) I get a "PackageForTheWeb Error" saying "This program is used internally by PackageForTheWeb. It should not be executed directly." I get this error on both Windows 2000 (where it should be installed) and Windows 7. The original driver CD was lost a long time ago, so the package was downloaded from the Nexcom website; I found no info about installer prerequisites there. What should I install (or do) to get this package installed? Update: I found a solution - I unpacked the package and executed the setup.exe that was inside. But I'm pretty sure this is not the action intended by the vendor, so answers are still welcome.

    Read the article

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir and delete some of the source files in src:path/to/dir that match a pattern (or size limit), but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit.

    Update 1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted, but don't want to have anything deleted in dest:/path/to/other/dir.

    Update 2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete after the transfer in src: when rsync starts. By the time rsync finishes there will be N+M such files there. If I now delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like to have a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, compare that list with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
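
    A two-pass approach is one way to get the "only delete what was verifiably transferred" guarantee (a sketch only - the pattern, paths and direction are placeholders, and it assumes the command runs on the source host pushing to dest): exclude the pattern files from a first full pass, then transfer only those files in a second pass with --remove-source-files, so a source file is deleted only once rsync has successfully copied it:

        #!/bin/bash
        # Pass 1: copy everything EXCEPT the files we intend to remove.
        rsync -a --exclude='*.log' /path/to/dir/ dest:/path/to/other/dir/

        # Pass 2: copy only the pattern files (*.log stands in for "matches the
        # pattern") and let rsync delete each source file once it has been
        # successfully transferred. Files created while this pass runs are
        # simply picked up by the next run.
        rsync -a --remove-source-files \
              --include='*/' --include='*.log' --exclude='*' \
              /path/to/dir/ dest:/path/to/other/dir/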

    Read the article

  • SSO to multiple websites from a SharePoint website

    - by Aico
    We have an intranet based on SharePoint 2010. In this intranet we have several links to other web servers within the same Active Directory, for example a link to our Outlook Web Access site on our Exchange 2010 environment. We have three different setups from which people visit this SharePoint environment and the other web servers:

        1. Windows 7 clients that are members of the Active Directory
        2. Home PCs that connect through an SSL VPN appliance
        3. Standalone thin clients (Windows 7 Embedded) within the corporate network

    The goal is to let people sign in only once. For the first group this isn't a problem, because AD integrated authentication works fine and the Windows logon is passed on to SharePoint and the other web servers. The second group is also working fine because of the LDAP integration that the SSL VPN appliance uses. The third group, however, is experiencing issues. They need to enter their credentials every time they click a link to another web server: first to access the SharePoint environment, then again when clicking the link for their webmail, and so on. Can someone tell me what the best solution would be to also get SSO working for the third group? Some extra information: we also have a Forefront TMG server in our environment. I read somewhere that Forefront might be part of a solution for this problem, but I'm not sure how. Maybe someone here can help me? Looking forward to some help. Best regards, Aico

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: how hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? Fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and hopefully stable. If an organization decides that it needs better reporting and management information, then the logical path (see The Data Warehouse Institute's Data Warehouse Maturity Model) is to a Data Warehouse -- no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Otherwise they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contains pre-built ETL routines and user content. Much has been written before about the advantages of that.

    So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel -- with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start they're going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold:

        - There could be things (which is the technical term for data elements) that customers figure out they need while implementing OUBI, which are often easier to add during the source product's implementation project than afterwards;
        - OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project, or which may even be impossible to fix afterwards. The conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data. If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI. That means it can't be analyzed effectively along with the rest of the organization's data.

    Then there is also the throw-away-work argument, which may be significant. The operational / transactional system cannot go live without reports on Day 1. A lot of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away after the OUBI implementation.

    I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that is bah humbug. No, seriously: given the arguments above, planning has to include the OUBI implementation and it has to be managed properly -- just like any other implementation. If so, it should not add any risk and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implementations are done like that. They just have to consider the points above.

    Read the article

  • WPF TreeView PreviewMouseDown on TreeViewItem

    - by imekon
    If I handle the PreviewMouseDown event on TreeViewItem, I get events for everything I click on that TreeViewItem, including the little +/- box to the left (Windows XP). How can I distinguish that? I tried the following:

        // We've had a single click on a tree view item.
        // Unfortunately this is the WHOLE tree item, including the +/-
        // symbol to the left. The tree doesn't do a selection, so we
        // have to filter this out...
        MouseDevice device = e.Device as MouseDevice;

        // This is bad. The whole point of WPF is for the code
        // not to know what the UI has - yet here we are testing for
        // it as a workaround. Sigh...
        // This is broken - if you click on the +/- on an item, it shouldn't
        // go on to be processed by OnTreeSingleClick... ho hum!
        if (device.DirectlyOver.GetType() != typeof(Path))
        {
            OnTreeSingleClick(sender);
        }

    What I'm after is just a single click on the tree view item, excluding the extra bits it seems to include.

    Read the article

  • SCOM 2007 versus Zenoss (or other open source)

    - by TheCleaner
    I've taken the liberty of testing both SCOM 2007 and Zenoss and found the following:

    SCOM 2007
    Pros:
    - Great MS Windows server monitoring and reporting
    - In-depth configuration and easily integrates into a "MS datacenter"
    Cons:
    - Limited network device monitoring support (without 3rd party plugins)
    - Expensive
    - Difficult learning curve

    Zenoss
    Pros:
    - Open source (free)
    - Decent server monitoring for Windows, great monitoring for Linux
    - Decent network device monitoring
    Cons:
    - Not as in-depth as SCOM (for Windows at least)

    So my question to you folks is this: given the above, and given that I'm trying to monitor 55 Windows servers, 1 Linux server, 2 ESX servers, and Juniper equipment... which would you recommend?

    Read the article

  • How to kill a single request in IIS?

    - by Petter Brodin
    I'm using IISPeek and can see that there's a single request that's hanging on a buggy page. I've fixed the bug so that others who open the page won't experience the problem, but the page is still active after almost an hour. I'd rather not stop the whole application as there are some important processes currently running. Via IISPeek I have the request number (9f0002008001238e) and clientIP. Could any of that be used to stop the ongoing request?

    Read the article

  • How to add an iptables rule with source IP address

    - by ???
    I have a bash script that starts with this:

        if [[ $EUID -ne 0 ]]; then
            echo "Permission denied (are you root?)."
            exit 1
        elif [ $# -ne 1 ]
        then
            echo "Usage: install-nfs-server <client network/CIDR>"
            echo "$ bash install-nfs-server 192.168.1.1/24"
            exit 2
        fi;

    I then try to add the iptables rules for NFS as follows:

        iptables -A INPUT -i eth0 -p tcp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        service iptables save
        service iptables restart

    I get the error:

        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Saving firewall rules to /etc/sysconfig/iptables:            [  OK  ]
        Flushing firewall rules:                                     [  OK  ]
        Setting chains to policy ACCEPT: filter                      [  OK  ]
        Unloading iptables modules:                                  [  OK  ]
        Applying iptables firewall rules:                            [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_ns [  OK  ]

    When I open /etc/sysconfig/iptables these are the rules:

        # Generated by iptables-save v1.3.5 on Mon Mar 26 08:00:42 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [466:54208]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p esp -j ACCEPT
        -A RH-Firewall-1-INPUT -p ah -j ACCEPT
        -A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Mon Mar 26 08:00:42 2012

    I've also tried:

        iptables -I RH-Firewall-1-INPUT 1 -m state --state NEW -m tcp -p tcp --source $1 --dport 111 -j ACCEPT
        iptables -I RH-Firewall-1-INPUT 2 -m udp -p udp --source $1 --dport 111 -j ACCEPT
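
    For what it's worth, one common way to get exactly "Bad argument `111'" is for the value after -s to expand to an empty string, so that iptables swallows --dport as the address and trips over the bare 111; that would point at the CIDR variable being empty when those two INPUT rules run. Purely as an illustrative sketch (not the original script), the same rules with the argument held in a quoted, checked variable would look like this:

        #!/bin/bash
        # Sketch: same NFS portmapper rules, with the client network kept in a
        # named variable that is quoted and verified before iptables ever runs.
        CLIENT_NET=${1:?usage: $0 <client network/CIDR>}

        for proto in tcp udp; do
            iptables -A INPUT  -i eth0 -p "$proto" -s "$CLIENT_NET" --dport 111 \
                     -m state --state NEW,ESTABLISHED -j ACCEPT
            iptables -A OUTPUT -o eth0 -p "$proto" --sport 111 \
                     -m state --state ESTABLISHED -j ACCEPT
        done

        service iptables save
        service iptables restart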

    Read the article
