Search Results

Search found 4578 results on 184 pages for 'ibm cell'.


  • Webcast Q&A: Demystifying External Authorization

    - by B Shashikumar
    Thanks to everyone who joined us on our webcast with the SANS Institute on "Demystifying External Authorization". Also a special thanks to Tanya Baccam from SANS for sharing her experiences reviewing Oracle Entitlements Server. If you missed the webcast, you can catch a replay of it here. Here is a compilation of the slides that were used on today's webcast: SANS Institute Product Review: Oracle Entitlements Server. We have captured the Q&A from the webcast for those who couldn't attend.

    Q: Is Oracle ADF integrated with Oracle Entitlements Server (OES)?
    A: In Oracle Fusion Middleware 11g and later, Oracle ADF, Oracle WebCenter, Oracle SOA Suite and other middleware products are all built on Oracle Platform Security Services (OPSS). OPSS provides many security functions like authentication, audit, credential stores, token validation, etc. OES is the authorization solution underlying OPSS, and OES 11g unifies different authorization mechanisms including Java2, ABAC and RBAC.

    Q: Which portal frameworks support the use of OES policies for portal entitlement decisions?
    A: Many portals, including Oracle WebCenter 11g, run natively on top of OES; the authorization engine in WebCenter is OES. Besides that, OES offers out-of-the-box integration with Microsoft SharePoint, so SharePoint sites, sub-sites, web parts, navigation items and document access control can all be secured with OES. Several other portals have also been secured with OES, for example IBM WebSphere Portal.

    Q: How do we enforce Separation of Duties (SoD) rules using OES, and how does that integrate with a product like OIA?
    A: A product like OIM or OIA can be used to set up and govern SoD policies. OES enforces these policies at run time. Role mapping policies in OES can assign roles dynamically to users under certain conditions, which makes it simple to enforce SoD policies inside an application at runtime.

    Q: Our web application has objects like buttons, text fields, drop-down lists, etc. Is there any "autodiscovery" capability that allows me to use/see those web page objects so I can start building policies over those objects? How does it work?
    A: There are a few different options with OES. When you build an app and make authorization calls with the app in the test environment, you can put OES in discovery mode and have OES register those authorization calls and decisions. Instead of doing this after the fact, an application like Oracle iFlex has built-in UI controls where, while the app is running, a script can intercept authorization calls and migrate those over to OES. And in Oracle ADF, a lot of resources are protected, so pages, task flows and other resources can be registered without OES knowing about them.

    Q: Does the current Oracle Fusion application use OES? The documentation does not seem to indicate it.
    A: The current version of Fusion Apps is using a preview version of OES. Soon it will be replaced with OES 11g.

    Q: Can OES secure mobile apps?
    A: Absolutely. Nowadays users are bringing their own devices, such as a smartphone or tablet, to work. With the Oracle IDM platform, we can tie identity context into the access management stack. With OES we can make use of context to enforce authorization for users accessing apps from mobile devices. For example, we can take into account different elements like authentication scheme, location, device type, etc. and tie all that information into an authorization decision.

    Q: Does Oracle Entitlements Server (OES) have an ESAPI implementation?
    A: OES is an authorization solution. ESAPI/OWASP is something we include in our platform security solution for all Oracle products, not specifically in OES.

    Q: ESAPI has an authorization API. Can I use that API to access OES?
    A: If the API supports an interface/SSPI model that can be configured to invoke an external authorization system through some mechanism, then yes.

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services. But as usual, you lack the time and resources that you need in order to develop the services properly. So you googled or bing’ed, found this blog post, and began crying in gratitude. Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about. So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand. The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API. And that legacy API is built in the image of the silo applications. Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path

    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory. Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide. Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch. According to Lafe, building their service inventory is a three-phase process:

    1. Model a business process. This requires intense collaboration between the IT and business wings of the organization, of course. The FBI uses IBM WebSphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process. The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction

    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint. For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing. Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services. Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat. (Astute readers will note that Erl’s diagram, restricted to the analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.)
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands. Also in the Open Source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding. Oracle Database, Middleware and Technology The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk. Applications According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies—one for B2B and one for B2C—surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals. In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle’s PeopleSoft, Oracle's Siebel CRM, Oracle’s JD Edwards EnterpriseOne offering high-performing analytics at a lower cost. Industries For the Communications Industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. 
    This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization – including increased service agility and improved network resource sharing. And for the Utilities Industry, Oracle is releasing solutions with new business features and an enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called master-master or bi-directional) replication captures data changes from two or more systems and replicates the changes to synchronize the data. Active-active replication is often needed for high availability, load balancing and scale-out purposes. Oracle GoldenGate is known as one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information):

    - Robust loop-back prevention
    - Comprehensive conflict detection and resolution support
    - Heterogeneous support across different database versions and operating systems

    Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata. However, the setup differs from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances, one MySQL 5.5 and one MySQL 5.6.

    MySQL 5.5 (Manager Port: 15105)

    Extract:
        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3305
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
        REPORTROLLOVER AT 05:30 ON saturday
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump:
        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Replicat:
        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3305
        targetdb test, userid root, password mysql
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        REPERROR (DEFAULT, ABEND)
        REPERROR (1062, IGNORE)
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, region_code="region code");
        map test.TCUSTORD, target test.TCUSTORD;

    MySQL 5.6 (Manager Port: 15106)

    Replicat:
        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3306
        targetdb test, userid root, password mysql
        --assumetargetdefs
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code);
        map test.TCUSTORD, target test.TCUSTORD;

    Extract:
        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3306
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump:
        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    The setup parameters are quite self-explanatory. The key part of the setup is avoiding replication data looping.
    Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify transactions already applied by Replicats, and thus avoids re-extracting those transactions in the Extracts. In the MySQL 5.5 Extract above, this is configured as follows:

        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl

    Setting up active-active replication is often more complicated than this and requires additional considerations; I will elaborate on these in follow-up discussions.

    Read the article

  • pppd disconnects from 3G, doesn't reconnect, w/ persist set

    - by bytenik
    I am trying to configure pppd to connect to a 3G network (Sprint, in this case) and then stay connected, reconnecting automatically if the remote connection is terminated. I have enabled the persist option. My configuration file is as follows:

        hide-password
        noauth
        connect "/usr/sbin/chat -v -f /etc/chatscripts/cellular"
        debug
        /dev/cell
        921600
        defaultroute
        noipdefault
        user " "
        persist
        maxfail 0
        lcp-echo-failure 10
        lcp-echo-interval 60
        holdoff 5

    However, when the peer disconnects the connection, pppd often waits a long time (substantially more than my holdoff) to reconnect the modem -- if it ever reconnects at all! An example log showing this:

        May 23 05:17:24 00270e0a8888 pppd[2408]: rcvd [LCP TermReq id=0x26]
        May 23 05:17:24 00270e0a8888 pppd[2408]: LCP terminated by peer
        May 23 05:17:24 00270e0a8888 pppd[2408]: Connect time 60.1 minutes.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Sent 0 bytes, received 0 bytes.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down started (pid 2456)
        May 23 05:17:24 00270e0a8888 pppd[2408]: sent [LCP TermAck id=0x26]
        May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down finished (pid 2456), status = 0x0
        May 23 05:17:24 00270e0a8888 pppd[2408]: Hangup (SIGHUP)
        May 23 05:17:24 00270e0a8888 pppd[2408]: Modem hangup
        May 23 05:17:24 00270e0a8888 pppd[2408]: Connection terminated.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Terminating on signal 15
        May 23 05:17:24 00270e0a8888 pppd[2408]: Exit.
        May 23 06:08:07 00270e0a8888 pppd[2500]: pppd 2.4.5 started by root, uid 0
        May 23 06:08:10 00270e0a8888 pppd[2500]: Script /usr/sbin/chat -v -f /etc/chatscripts/cellular finished (pid 2530), status = 0x0
        May 23 06:08:10 00270e0a8888 pppd[2500]: Serial connection established.
        May 23 06:08:10 00270e0a8888 pppd[2500]: using channel 11

    The disconnect at the request of the peer occurs at 5:17, but the reconnect didn't happen until 6:08. I had a friend monitoring the server, so I'm not certain that this wasn't a manual reconnection. Either way, it either took almost an hour to reconnect or never reconnected. Shouldn't persist + holdoff 5 cause this to automatically reconnect after 5 seconds of the link terminating?

    Read the article

  • How to make multiple Excel files open in ONE window/instance of Excel 2003 in Win 7

    - by Mark
    I'm running Excel 2003 on my new Windows 7 machine. (There is also an Excel 2010 Starter preinstalled that I do not use.) I'm a heavy user of Excel. I use it all day every day. I often have 10 or 15 sheets open at once, and many of them have cell references to each other. I also have a macro file that keeps all my shortcuts. On my old W2K machine, when I clicked on a .xls file or a shortcut to one, it would open that file in the existing instance of Excel. This is as it should be. I would have many files open, in only one "window" or instance of Excel. All the files could interact with each other: the cross-file lookups worked, my macros worked, I could switch between workbooks with CTRL-Tab or CTRL-F6, and I could move tabs from one workbook to another. On the new W7 machine, clicking on an icon opens a NEW INSTANCE of Excel every time. This is terribly frustrating. None of my connected spreadsheets work anymore. My macros don't work. I can't connect files, I can't move tabs. I'm stuck. I can't do my work! I can still open files in one instance by doing a CTRL-O and navigating, but I need my files to work on a click. I'm guessing this is a flaw in the registry, possibly because of the starter Excel 2010 that came preloaded on my new machine. Can you walk me through a registry edit to fix this bug? Is there an easier way than a registry edit?

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMWare to IBM and many others. Microsoft's offering in this is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already. So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can put that right on your resume.

    Read the article

  • What is a good WordPress theme for long Objective-C code samples [closed]

    - by willc2
    As some of you iPhone developers know, Objective-C can be a verbose language. Long, descriptive variable and method names are the norm. I'm not complaining, it makes code easier to read and code completion makes it easy to type. But damn! Check out this method name for getting a cell in a table view: -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath; I have a WordPress blog where I publish my code samples as I'm learning the language. One thing I hate on other blogs is how the code won't fit in a column without that scroll bar or without wrapping around. It really made it hard for me to read and comprehend method names back when I was a super-noob (six months ago). Right now I use the clean-looking Fazyvo 1.0 theme by noonnoo. I love the look of it but the columns are just too narrow and it doesn't have support for wider ones. I could hand-modify it but then I'd have to maintain/redo those changes every time I updated it. Instead, I'm looking for a nice theme that has width control built-in and looks good at larger font sizes. Can anyone help? Note: I use WP-CodeBox for code syntax highlighting.

    Read the article

  • Stepping outside Visual Studio IDE [Part 2 of 2] with Mono 2.6.4

    - by mbcrump
    Continuing part 2 of my Stepping outside the Visual Studio IDE is the open-source Mono Project. Mono is a software platform designed to allow developers to easily create cross-platform applications. Sponsored by Novell (http://www.novell.com/), Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active and enthusiastic contributing community is helping position Mono to become the leading choice for development of Linux applications. So, to clarify: you can use Mono to develop .NET applications that will run on Linux, Windows or Mac. It’s basically an IDE that has roots in Linux. Let’s first look at the compatibility.

    Compatibility

    If you already have an application written in .NET, you can scan your application with the Mono Migration Analyzer (MoMA) to determine if your application uses anything not supported by Mono. The current release version of Mono is 2.6 (released December 2009). The easiest way to describe what Mono currently supports is: everything in .NET 3.5 except WPF and WF, with limited WCF. Here is a slightly more detailed view, by .NET framework version:

    Implemented:
    - C# 3.0, System.Core, LINQ, ASP.Net 3.5, ASP.Net MVC
    - C# 2.0 (generics), Core Libraries 2.0 (mscorlib, System, System.Xml), ASP.Net 2.0 (except WebParts), ADO.Net 2.0, Winforms/System.Drawing 2.0 (does not support right-to-left)
    - C# 1.0, Core Libraries 1.1 (mscorlib, System, System.Xml), ASP.Net 1.1, ADO.Net 1.1, Winforms/System.Drawing 1.1

    Partially Implemented:
    - LINQ to SQL (mostly done, but a few features missing)
    - WCF (Silverlight 2.0 subset completed)

    Not Implemented:
    - WPF (no plans to implement)
    - WF (will implement WF 4 instead in future versions of Mono)
    - System.Management (does not map to Linux)
    - System.EnterpriseServices (deprecated)

    Links to documentation: The Official Mono FAQs. Links to binaries: Mono IDE, latest version 2.6.4. That's it; nothing more is required to compile and run .NET code in Linux.

    Installation

    After landing on the Mono project home page, you can select which platform you want to download. I typically pick the Virtual PC image since I spend all of my day using Windows 7. Go ahead and pick whatever version is best for you. The Virtual PC image comes with SUSE Linux. Once the image is launched, you will see the following: I'm not going to go through each option, but it's best to start with the "Start Here" icon. It will provide you with information on new projects or existing VS projects. After you get Mono installed, it's probably a good idea to run a quick Hello World program to make sure everything is set up properly. This lets you know that your Mono is working before you try writing or running a more complex application. To write a "Hello World" program, follow these steps:

    1. Start Mono Development Environment.
    2. Create a new Project: File->New->Solution.
    3. Select "Console Project" in the category list.
    4. Enter a project name into the Project name field, for example, "HW Project".
    5. Click "Forward".
    6. Click "Packaging", then OK.

    You should have a screen very similar to a VS Console App. Click the "Run" button in the toolbar (Ctrl-F5). Look in the Application Output and you should have the "Hello World!" output. Your screen should look like the screen below. That should do it for a simple console app in Mono.
    To test out an ASP.NET application, simply copy your code to a new directory in /srv/www/htdocs, then visit the following URL: http://localhost/directoryname/page.aspx, where directoryname is the directory where you deployed your application and page.aspx is the initial page for your software.

    Databases

    You can continue to use a SQL Server database, or use MySQL, Postgres, Sybase, Oracle, IBM’s DB2 or SQLite.

    Conclusion

    I hope this brief look at the Mono IDE helps someone get acquainted with development outside of VS. As always, I welcome any suggestions or comments.

    Read the article

  • What does this diagnostic output mean?

    - by ChrisF
    I recently had a fault with my broadband connection. It turned out to be a fault with the ISP's or telco's equipment. My ISP posted this diagnostic, but while I understand it in general, I'd like to know more about the details. I'm assuming that ATM means Asynchronous Transfer Mode and PPP means Point to Point Protocol. It was this that my router was indicating as the fault.

        xDSL Status Test Summary
        Sync Status: Circuit In Sync

        General Information
        NTE Status:
        NTE Power Status: Unknown
        Bypass Status:

        DSL Link Information (Upstream / Downstream)
        Loop Loss:        9.0 / 17.0
        SNR Margin:       25 / 15
        Errored Seconds:  0 / 0
        HEC Errors:       0
        Cell Count:       0 / 0
        Speed:            448 / 8128

        TAM Status: Successfully executed operation
        Network Test: Sub-Test Results

        Modem: pass
        Transmitter Power (Upstream):   12.4 dBm
        Transmitter Power (Downstream): 8.8 dBm
        Upstream psd:   -38 dBm/Hz
        Downstream psd: -51 dBm/Hz

        DSL: pass
        Equipment Vendor Name: TSTC
        Equipment Vendor Id: n/a
        Equipment Vendor Revision: n/a
        Training Time: 8 s
        Num Syncs: 1
        Upstream bit rate:   448 kbps
        Downstream bit rate: 8128 kbps
        Upstream maximum bit rate:   1108 kbps
        Downstream maximum bit rate: 11744 kbps
        Upstream Attenuation:   3.5 dB
        Downstream Attenuation: 0.0 dB
        Upstream Noise Margin:   20.0 dB
        Downstream Noise Margin: 19.0 dB
        Local CRC Errors:  0
        Remote CRC Errors: 0
        Up Data Path:   interleaved
        Down Data Path: interleaved
        Standard Used: G_DMT

        INP
        INP Upstream Symbols: n/a
        INP Upstream Delay: 4 ms
        INP Upstream Depth: 4
        INP Downstream Symbols: n/a
        INP Downstream Delay: 5 ms
        INP Downstream Depth: 32

        ATM: fail (Reason: No ATM cells received)
        Number of cells transmitted: 30
        Number of cells received: 0
        Number of near-end HEC errors: 0
        Number of far-end HEC errors: n/a

        PPP: fail (Reason: No response from peer)
        PAP authentication: not tested
        CHAP authentication: not tested

    (I'm not sure that Super User is the best place to ask this, but two people have suggested I ask it here, so here I am.)

    Read the article

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet that has a complex computation that is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells of the spreadsheet) and then it outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula on a different spreadsheet to calculate a set of input values and produce the corresponding set of output values. Imagine this as creating a different table with 10 columns for the input variables and 5 columns for the outputs, then copying each input into the other spreadsheet and copying back the output into the results table. For instance:

    - A1, A2, A3, ..., A10 are cells where someone enters values
    - through a series of calculations, B1, B2, B3, B4 and B5 are updated with some formulas

    Can I use the whole series of calculations from A1..A10 into B1..B5 without creating one massive huge formula or a VBA macro? I want to have a set of input values in 100 rows from A100, B100, C100, ..., J100 onward. Then do some Excel magic that will:

    1. copy the values from A100...J100 into A1 to A10
    2. wait for the result to appear in B1 to B5
    3. copy the values from B1 to B5 into K100 to O100
    4. repeat steps 1 to 3 for all rows from 100 to 150
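    If scripting from outside Excel is acceptable, the loop in steps 1-4 can be automated with a driver script; below is a minimal sketch using Python with the xlwings library (the workbook and sheet names are assumptions), which drives a live Excel instance so the spreadsheet's own formulas do the calculating:

        import xlwings as xw  # drives a running Excel instance, so formulas recalculate

        wb = xw.Book("calculator.xlsx")   # hypothetical workbook name
        sht = wb.sheets["Sheet1"]         # sheet holding both the model and the table

        for row in range(100, 151):                                     # rows 100..150
            inputs = sht.range((row, 1), (row, 10)).value               # step 1: read A..J
            sht.range("A1:A10").options(transpose=True).value = inputs  # write into A1:A10
            wb.app.calculate()                                          # step 2: recalculate
            outputs = sht.range("B1:B5").value                          # read the 5 results
            sht.range((row, 11), (row, 15)).value = outputs             # step 3: write K..O
        wb.save()

    The same loop could be written as a macro, but an external driver keeps the workbook itself untouched.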

    Read the article

  • OpenOffice Calc: How can I count the number of different items with data pilot?

    - by manu
    Hi all, I have a rather long spreadsheet with historical information about issues solved by users in a collaborative environment. The spreadsheet has the following (relevant) columns: date, week no., project, author id, etc. The week no. is calculated from the date; it is basically the year concatenated with the week number within that year. For instance, both 2009-02-18 and 2009-02-20 yield the week number 200908 (the 8th week of 2009), and 2009-02-23 yields 200909 (the 9th week of 2009). I need to count how many different users (given by author id) contributed to some project, on a weekly basis. I have set up a data pilot with the week as Row Field, the project as the Column Field, and count-author as the Data Field. However, this counts the author id as different instances. This is not what I need. I need to count how many different users contributed to each project on a weekly basis. I expect to get something like:

        week      Project1   Project2   Project3
        200901    10         2
        200902    2          7

    Each inner cell containing how many different users contributed. With the count-author configuration, what I get is how many contributions (total) the project got that week. Is there a way to tell OpenOffice Calc to do what I want?
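    For what it's worth, if the sheet can be exported to CSV, the distinct-user counts can be produced outside of Calc; here is a short sketch in Python with pandas (the file and column names are assumptions based on the description):

        import pandas as pd

        df = pd.read_csv("issues.csv")  # assumed columns: date, week, project, author_id

        # Count distinct authors per (week, project), then pivot projects into
        # columns, matching the shape of the data pilot shown above.
        counts = (df.groupby(["week", "project"])["author_id"]
                    .nunique()
                    .unstack("project"))
        print(counts)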

    Read the article

  • Java EE 7 Roadmap

    - by Linda DeMichiel
    The Java EE 6 Platform, released in December 2009, has seen great uptake from the community with its POJO-based programming model, lightweight Web Profile, and extension points. There are now 13 Java EE 6 compliant appserver implementations today! When we announced the Java EE 7 JSR back in early 2011, our plans were that we would release it by Q4 2012. This target date was slightly over three years after the release of Java EE 6, but at the same time it meant that we had less than two years to complete a fairly comprehensive agenda — to continue to invest in significant enhancements in simplification, usability, and functionality in updated versions of the JSRs that are currently part of the platform; to introduce new JSRs that reflect emerging needs in the community; and to add support for use in cloud environments. We have since announced a minor adjustment in our dates (to the spring of 2013) in order to accommodate the inclusion of JSRs of importance to the community, such as Web Sockets and JSON-P. At this point, however, we have to make a choice. Despite our best intentions, our progress has been slow on the cloud side of our agenda. Partially this has been due to a lack of maturity in the space for provisioning, multi-tenancy, elasticity, and the deployment of applications in the cloud. And partially it is due to our conservative approach in trying to get things "right" in view of limited industry experience in the cloud area when we started this work. Because of this, we believe that providing solid support for standardized PaaS-based programming and multi-tenancy would delay the release of Java EE 7 until the spring of 2014 — that is, two years from now and over a year behind schedule. In our opinion, that is way too long. We have therefore proposed to the Java EE 7 Expert Group that we adjust our course of action — namely, stick to our current target release dates, and defer the remaining aspects of our agenda for PaaS enablement and multi-tenancy support to Java EE 8. Of course, we continue to believe that Java EE is well-suited for use in the cloud, although such use might not be quite ready for full standardization. Even today, without Java EE 7, Java EE vendors such as Oracle, Red Hat, IBM, and CloudBees have begun to offer the ability to run Java EE applications in the cloud. Deferring the remaining cloud-oriented aspects of our agenda has several important advantages: It allows Java EE Platform vendors to gain more experience with their implementations in this area and thus helps us avoid risks entailed by trying to standardize prematurely in an emerging area. It means that the community won't need to wait longer for those features that are ready at the cost of those features that need more time. Because we have already laid some of the infrastructure for cloud support in Java EE 7, including resource definition metadata, improved security configuration, JPA schema generation, etc., it will allow us to expedite a Java EE 8 release. We therefore plan to target the Java EE 8 Platform release for the spring of 2015. This shift in the scope of Java EE 7 allows us to better retain our focus on enhancements in simplification and usability and to deliver on schedule those features that have been most requested by developers. 
These include support for HTML5 in the form of Web Sockets and JSON-P; the simplified JMS 2.0 APIs; improved Managed Bean alignment, including transactional interceptors; the JAX-RS 2.0 client API; support for method-level validation; a much more comprehensive expression language; and more. We feel strongly that this is the right thing to do, and we hope that you will support us in this proposed direction.

    Read the article

  • Relaying to tech "support" that computer is actually broken.

    - by Sion
    First some background: I have a Dell Inspiron 15R M050. It is still under the Dell limited warranty and the Best Buy extended warranty. I am currently dual-booting Debian Squeeze and Windows 7; the only reason I go into Windows is to play video games, specifically Steam games.

    Issue: When I play my games in Windows, I can play for anywhere from 5 minutes to 2 hours before I suffer a hard lock. I cannot alt-tab, ctrl-alt-delete, ctrl-shift-escape, or do anything for 2-3 minutes. After this hard-lock period everything runs fine, and I can continue the game for probably another hour at least before I suffer another lock.

    Games: Borderlands, Splinter Cell: Chaos Theory, Starcraft 2, Garrys Mod.

    What I have tried: Running the diagnostic suite in the Dell BIOS, restoring the OEM Windows recovery partition on the HD, fresh-installing Windows 7 Professional, updating the BIOS, and calling tech support and having them run a software hardware-diagnostics suite.

    The question: From the research I have performed, I think it might be a lack of thermal paste on the CPU. Would I be able to go to Best Buy and have them run a hardware-level diagnostic, and then have them tell Dell that there is a hardware issue? Or could there be a different problem?

    Read the article

  • Linux wireless disconnect every 20 minutes

    - by james
    My laptop uses CentOS 6.3 with kernel 2.6.32-279.el6.x86_64. My wireless adaptor is an Intel Corporation Centrino Wireless-N 1000. My wireless connection always drops after about 20 minutes. The network applet shows the connection is still up with good signal strength, but I just cannot load any web pages, not even the configuration page of the wireless router. The problem persists until I disable and reconnect the wireless. Other devices, like my cell phone, use the same wireless network without the problem. Just yesterday I was using the same laptop with Fedora 17 without this problem. I also searched the internet, and someone said running the NetworkManager and network services simultaneously may be a problem. But I cannot stop either one of them because: if I stop network and start NetworkManager, the network service will start automatically; and if I stop NetworkManager and run network, it says "Device does not seem to be present, delaying initialization." when trying to bring up the wireless. What shall I do to get rid of the problem? Thank you very much!

    Read the article

  • Finding Leaders Breakfasts - Adelaide and Perth

    - by rdatson-Oracle
    HR Executives Breakfast Roundtables: Find the best leaders using science and social media! Perth, 22nd July & Adelaide, 24th July.

    What is leadership in the 21st century? What does the latest research tell us about leadership? How do you recognise leadership qualities in individuals? How do you find individuals with these leadership qualities, hire and develop them? Join the NeuroLeadership Institute, the Hay Group, and Oracle to hear:

    1. the latest neuroscience research about human bias, and how it applies to finding and building better leaders;
    2. the latest techniques to recognise leadership qualities in people;
    3. and how you can harness your people and social media to find the best people for your company.

    Reflect on your hiring practices at this thought-provoking breakfast, where you will be challenged to consider whether you are using best practices aimed at getting the right people into your company.

    Speakers

    Abigail Scott, Hay Group: Abigail is a UK registered psychologist with 10 years international experience in the design and delivery of talent frameworks and assessments. She has delivered innovative assessment programmes across a range of organisations to identify and develop leaders. She is experienced in advising and supporting clients through new initiatives using an evidence-based approach and has published a number of research papers on fairness and predictive validity in assessment.

    Karin Hawkins, NeuroLeadership Institute: Karin is the Regional Director of the NeuroLeadership Institute’s Asia-Pacific region. She brings over 20 years experience in the financial services sector delivering cultural and commercial results across a variety of organisations and functions. As a leadership risk specialist, Karin understands the challenge of building deep bench strength in teams, and she is able to bring evidence, insight, and experience to support executives in meeting today’s challenges.

    Robert Datson, Oracle: Robert is a Human Capital Management specialist at Oracle, with several years as a practicing manager at IBM, learning and implementing the latest management techniques for hiring, deploying and developing staff. At Oracle he works with clients to enable best practices for HR departments, drawing the linkages between HR initiatives and bottom-line improvements.

    Agenda

    07:30 a.m. Breakfast and Registrations
    08:00 a.m. Welcome and Introductions
    08:05 a.m. Breaking Bias in leadership decisions - Karin Hawkins
    08:30 a.m. Identifying and developing leaders - Abigail Scott
    08:55 a.m. Finding leaders, the social way - Robert Datson
    09:20 a.m. Q&A and Closing Remarks
    09:30 a.m. Event concludes

    If you are an employee or official of a government organisation, please click here for important ethics information regarding this event.
    To register for Perth, Tuesday 22nd July, please click HERE. To register for Adelaide, Thursday 24th July, please click HERE.

    Contact: To register or if you have questions on the event, contact Aaron Tait on +61 2 9491 1404.

    Read the article

  • How to re-arrange Excel database from 1 long row, into 3 short rows of unequal lengths and automatically repeat the process?

    - by user326884
    This question is an extension/continuation of my previous question at How to re-arrange Excel database from 1 long row, into 3 short rows and automatically repeat the process?, which was answered by Jason Lewis, for which I'm grateful. But being a dummy with the "INDIRECT" Excel function, I need assistance again. For example, in Sheet A, Row 1 has the following data in each cell (altogether 72 cells occupied):

    A1 B1 C1 D1 E1 F1 G1 H1 I1 J1 K1 L1 M1 N1 O1 P1 Q1 R1 S1 T1 U1 V1 W1 X1 Y1 Z1 AA1 AB1 AC1 AD1 AE1 AF1 AG1 AH1 AI1 AJ1 AK1 AL1 AM1 AN1 AO1 AP1 AQ1 AR1 AS1 AT1 AU1 AV1 AW1 AX1 AY1 AZ1 BA1 BB1 BC1 BD1 BE1 BF1 BG1 BH1 BI1 BJ1 BK1 BL1 BM1 BN1 BO1 BP1 BQ1 BR1 BS1 BT1

    This is to be re-arranged into Sheet B in the following format (see the sketch after this list):

    Row 1: A1 B1 C1 D1 E1 F1 G1 H1 I1 J1 K1 L1 M1 N1 O1 P1 Q1 R1 S1 T1 U1 V1 W1 X1 Y1 Z1 AA1 AB1 AC1 AD1 AE1 AF1 AG1 AH1 AI1
    Row 2: AJ1 AK1 AL1 AM1 AN1 AO1 AP1 AQ1 AR1 AS1 AT1 AU1 AV1 AW1 AX1 AY1 AZ1 BA1 BB1 BC1 BD1 BE1 BF1 BG1 BH1 BI1 BJ1 BK1
    Row 3: BL1 BM1 BN1 BO1 BP1 BQ1 BR1 BS1 BT1

    Sheet A (the database sheet) has a lot of rows (for example 3,000 rows, each with 72 cells occupied with data); hence Sheet B (the reformatted database) is estimated to have 9,000 rows (i.e. 3 x 3,000) of unequal lengths. Thanking you in anticipation of your speedy response.
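    As an aside, this reshaping is also straightforward outside of Excel; here is a sketch in Python (assuming Sheet A has been saved as CSV, with a hypothetical file name) that splits each 72-cell row into the 35/28/9 layout described above:

        import csv

        WIDTHS = [35, 28, 9]  # A1..AI1, AJ1..BK1, BL1..BT1: 35 + 28 + 9 = 72 cells

        with open("sheet_a.csv", newline="") as src, \
             open("sheet_b.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):
                start = 0
                for width in WIDTHS:  # emit 3 short rows per long source row
                    writer.writerow(row[start:start + width])
                    start += width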

    Read the article

  • Reference entire excel 2007 sheet to another sheet

    - by Keikoku
    I have one sheet with 100 rows of data. I have a second sheet that will use the same data but with filters applied. In fact, I will have a dozen sheets, each with different filters. My goal is to have references from every sheet to the first, so that prior to filtering they all contain exactly the same data. This way I only have to modify one sheet and all sheets will reflect the changes. The purpose is to create external links from Word to Excel to display specific rows, but there appears to be a limitation to linking where it displays absolutely everything that you see on the sheet itself (and each view must be different). I can manually reference the first cell and then drag the black box to easily expand it to the required number of rows, but that would require me to go into each sheet and drag the black box again whenever I add new entries to the master copy. Is there an easy way to do this? Note that the issue is the same as in "Easy way for users to update linked documents Excel 2007", except this time I am using a different approach. Solutions to both would be welcome.
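    If re-dragging the references whenever the master grows becomes tedious, one option is to generate the reference formulas with a script; a rough sketch using Python with openpyxl (the workbook and sheet names are assumptions), pointing every other sheet's cells back at the master sheet over a generous range:

        from openpyxl import load_workbook
        from openpyxl.utils import get_column_letter

        wb = load_workbook("report.xlsx")   # hypothetical workbook name
        MASTER = "Master"                   # sheet holding the source rows
        ROWS, COLS = 200, 10                # cover more rows than currently used

        for ws in wb.worksheets:
            if ws.title == MASTER:
                continue
            for r in range(1, ROWS + 1):
                for c in range(1, COLS + 1):
                    ref = f"{get_column_letter(c)}{r}"
                    ws.cell(row=r, column=c).value = f"='{MASTER}'!{ref}"
        wb.save("report.xlsx")

    One wrinkle to note: a reference to an empty master cell displays as 0 in Excel, so over-allocating rows trades a clean look for not having to re-drag.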

    Read the article

  • Importing long numerical identifiers into Excel

    - by Niels Basjes
    I have some data in a database that uses IDs in the form of 16-digit numbers. In some situations I need to export the data in such a way that it can be manipulated in Excel. So I export the data into a file and import it into Excel. I've tried several file formats and I'm stuck. The problem I'm facing is that when reading a file into Excel that has a cell that looks like a number, Excel treats it as a number. The catch is that (as far as I can tell) all numerical values in Excel are double-precision floating point, which has a precision of less than 16 digits. So my IDs are changed: very often the last digit is changed to a 0. So far I've only been able to convince Excel to keep the ID unchanged by breaking it myself: by adding a letter or symbol to the ID. This however means that in order to use the value again it must be "unbroken". Is there a way to create a file where I can specify that Excel must treat the value as text without changing the value? Or is there a way to let Excel treat the value as a long (64-bit integer)?
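    One common workaround, if you control the export step, is to write each identifier so that Excel is forced to treat it as text; a sketch in Python (hypothetical file name), using the ="..." formula trick that Excel's CSV import understands:

        import csv

        ids = ["1234567890123456", "9876543210987654"]  # example 16-digit identifiers

        with open("export.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for record_id in ids:
                # ="..." imports as a text-valued formula, so no digits are
                # lost to double-precision conversion.
                writer.writerow([f'="{record_id}"'])

    The downside is the same as manual breaking: to use the value elsewhere you are reading a formula result rather than a plain cell, though no "un-breaking" step is needed.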

    Read the article

  • Formula-based Excel page headers

    - by Jake Krohn
    I'm using the "Rows to repeat at top" function in Excel's "Page Setup" dialog to ensure that a multi-row header block appears on every printed page of my worksheet. However, I'd like to be able to change certain bits of the header based on the content of the current page. I would simply like to display the value of one cell in the first row that is printed on the page. If this is my header: Section: xx And the data looks like this (columns are Section and Name): 1 Foo 1 Bar 2 Baz I want the "xx" in the header to be "1". If, further down on the next page, the value in the Section column is "3", I want that printed in the header of the next page. I originally thought that using the "OFFSET" function might help, e.g. ="Section: "&OFFSET(A2, 1, 0) But it only shows the offset from the original placement of the header, thus only working on page 1. The end document is a PDF, so right now I'm able to go back in with the "TouchUp Text Tool" in Acrobat and add the numbers page by page. But it gets to be a tedious process with 70+ page reports. Anyone have any better ideas that don't require me mucking up the original Excel document with inserted headers every N lines? This is Excel 2008 for Mac, if it makes a difference.

    Read the article

  • Puzzling TCP performance over 3G / UMTS

    - by lemonsqueeze
    I'm using 3G as my primary internet connection, and TCP over this thing is getting more puzzling every day. For example:

    1. Downloading from kernel.org is crazy fast: $ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.8.tar.bz2 increases to ~500 kB/s after a few secs!
    2. Some servers are incredibly slow, for instance www.graphic-pc.com: downloading a big file with wget, it starts at ~30 kB/s for a split second, then collapses to 5-10 kB/s or even worse.
    3. Web browsing is decent but somewhat unreliable. Randomly, a page will take really long to load or even fail to load, but a reload can succeed almost immediately.

    Now, by chance I started playing with OpenVPN over UDP on top of the 3G connection, and OMG, suddenly everything's extremely fast! The same www.graphic-pc.com now shoots at 100-200 kB/s!

    What's going on here? How come it is so much better with the VPN than without? And why does graphic-pc.com crawl when kernel.org flies? Something to do with my TCP stack (or the server), or some buggy router in between?

    Notes: The setup is a laptop running Ubuntu Lucid and a Huawei 3G dongle (so a direct pppd connection). I can reproduce this pretty much any time during the day and I'm not moving, so it's clearly not cell environment or internet congestion (although kernel.org without VPN sometimes does worse in the evening, 60 kB/s or so, but still 500 kB/s with VPN!). For 2), wireshark shows retransmitted packets, dup ACKs, even out-of-order packets sometimes. I've tried playing with different /proc/sys/net/ipv4 parameters (tcp_rmem, window_scaling, tcp_congestion...); it doesn't seem to make a difference.

    Update: Tried under Windows 7 (no VPN) with some interesting results:

        tcp settings:     default    tcp_optimizer
        kernel.org:       10 kB/s    20 kB/s
        graphic-pc.com:   8 kB/s     70 kB/s !

    tcp_optimizer turned on CTCP among other things. Have to check what OS graphic-pc.com is running; my bet is Linux's tcp_westwood and MS CTCP don't mix well here...

    Read the article

  • Fill down in Excel, but based on multiple values

    - by Jenn D.
    I have spreadsheets (not created by me) that have blank entries in one column where they should really have data. I want to take every empty cell and fill it with the nearest value above it. I'm looking for as little manual intervention as possible, because I'll have to do it repeatedly. I thought some previous version of Excel, or maybe another spreadsheet from the distant past, would do this by default -- that is, if you selected the column with foo and bar, and chose the equivalent of "fill down", you would get what's in the WANT column. What I actually get in Excel is the GET column.

        HAVE:    WANT:    GET:
        foo 1    foo 1    foo 1
            2    foo 2    foo 2
        bar 1    bar 1    foo 1
            2    bar 2    foo 2
            3    bar 3    foo 3

    I'm worried that this might need a macro to be done properly. I used to be a whiz with Excel macros, and then suddenly they were all in VB. My fallback position will be to dump the whole thing to CSV and write a Python script, but if there's any way to do it in Excel that would be much preferable. Even if it involves a couple of different manual steps, that's fine; just not one step per group of lines. That is, a process of "copy the column, do X to it, cut and paste it back" would work, but "do X for each occurrence of foo or bar" won't. The files are too big for that. Any thoughts are appreciated!
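    Since the CSV fallback is already on the table, the fill-down itself is only a few lines; here is a sketch in Python (assuming the values to fill are in the first column of the exported CSV, with hypothetical file names) that carries the last non-empty value forward:

        import csv

        with open("have.csv", newline="") as src, \
             open("want.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            last = ""
            for row in csv.reader(src):
                if row and row[0].strip():
                    last = row[0]    # remember the nearest value above
                elif row:
                    row[0] = last    # fill the blank with it
                writer.writerow(row)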

    Read the article

  • nginx giving 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nice with the WordPress plugin WP-SuperCache, which adds static files of my blog posts. To serve the static file I need to make sure that some cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on an URL that isn't rewritten. The location block of the configuration:

        location /blog {
            index index.php;
            set $supercache_file '';
            set $supercache_ok 1;
            if ($request_method = POST) {
                set $supercache_ok 0;
            }
            if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") {
                set $supercache_ok '0';
            }
            if ($supercache_ok = '1') {
                set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz';
            }
            if (-f $supercache_file) {
                rewrite ^(.*)$ $supercache_file break;
            }
            try_files $uri $uri/ @wordpress;
        }

    The above doesn't work, and if I remove all the ifs above and add

        if ($http_host = 'mydomain.tld') {
            set $supercache_ok = 1;
        }

    then I get the exact same message in errors.log, namely:

        2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"

    Remove the if and everything works as it should. I'm stymied, no idea at all where I should start searching. =/

        ba@cell: ~> nginx -v
        nginx version: nginx/0.7.65

    Read the article

  • Use autocomplete in dropdown cells with Excel 2007?

    - by Martin
    I want to make a survey with Excel, and I have therefore defined the cells for the answers as dropdown cells which only accept answers from a certain list, e.g.: the two lists List1 and List2 (yellow cells) are the possible answers for the questions in Block 1.x resp. 2.x (blue). There might be a Block 4 with more questions, which again use List1 for their possible answers. My problem is: I'd like to be able to use the autocomplete feature to fill in the blue cells with the dropdown menu, so that the user only types 5 and it automatically expands to "5: extremely important" or "5: extremely difficult". According to my research on the web, this should be possible if I add the list with possible answers directly above the cells where autocomplete should work (I did this with the green helper cells, which could be hidden). But I have to enter at least 4 characters, "5: e", to get the autocompleted suggestion. Is there a way to make autocomplete already replace a "5" with the corresponding valid term? As the survey file shall be distributed to a lot of people "outside", I can not use VBA magic because it may be blocked on their computers and might not work. EDIT: it seems to have to do with the numbers I use: if I started my list items with A, B, C instead of 1, 2, 3, it would work perfectly. Excel seems to ignore pure numbers when they are entered and does not try to autocomplete them... Is there a workaround? (I hope it is clear what I want; it seems a little difficult to explain.)

    Read the article

  • Standard deviation in Excel

    - by user270692
    So I have 3 data sets, which all came from the first data set, built in Excel. All consisted of random numbers from =RANDBETWEEN(0,100), starting from A2:A102, which is 100 integers. For data set B I had to enter =SUM(A2+5), press enter, and drag it down to get the integers for cells B2:B102; and for C I had to use multiplication to get it: I did =PRODUCT(A2:A102*5). So everything was taken from data set A. Then I applied the formulas needed for the sample standard deviation and the mean (average). For data sets A and B the standard deviations were the same, but the mean was larger in data set B, of course, because I added 5 to each cell in set A. My question is: why wouldn't the standard deviation be the same for data set C also, if I'm using the info from data set A? And how do I calculate the sample standard deviation by hand, so I can explain why the standard deviation doesn't change but the mean does? I don't know what numbers to include in the formula for sample standard deviation.
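    For the by-hand part: with values x_1, ..., x_n, the mean and the sample standard deviation are (in LaTeX notation)

        \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i,
        \qquad
        s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 }

    Adding a constant c to every value shifts the mean to \bar{x} + c but leaves each deviation unchanged, since (x_i + c) - (\bar{x} + c) = x_i - \bar{x}; that is why data set B has the same standard deviation as A but a larger mean. Multiplying every value by a constant a scales each deviation by a, so the standard deviation becomes |a| \cdot s; a data set built by multiplying each cell of A by 5 should therefore show 5 times the standard deviation of A.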

    Read the article
