Search Results

Search found 30046 results on 1202 pages for 'linq via c series'.


  • Can TP-Link router make phone calls?

    - by Umair Ashraf
    I have a TP-Link router with DSL service provided by a local company, which delivers it over the landline phone. The landline cord is plugged into an Ethernet router, which is then plugged into the TP-Link wireless router, and I can access the internet through this wireless router all over my home with all computers: Landline Cord [into] Ethernet Router [into] TP-Link Wireless Router [air] Computers. I would add that the landline cord also goes into a phone device, which I use to make calls, and that phone is not cordless. I am currently accessing the internet via WiFi on my laptop, and I want to ask whether it is possible to make landline calls from this same computer I am surfing the internet on. In other words, can I dial out via the TP-Link router over the landline? The landline cord is the actual data gateway and is also used to make calls, so it carries data and voice over the same wire simultaneously.

    Read the article

  • OBIEE Version 11.1.1.7.140527 Now Released

    - by Lia Nowodworska - Oracle
    (in via Martin) The Oracle Business Intelligence Enterprise Edition (OBIEE) 11g 11.1.1.7.140527 Bundle Patch is now available to download via My Oracle Support | Patches & Updates. It is provided as a single Bundle Patch, Patch 18507268, and comprises the following:
      Patch 16913445 - 1 of 8 Oracle BI Installer (BIINST)
      Patch 18507640 - 2 of 8 Oracle BI Publisher (BIP)
      Patch 18657616 - 3 of 8 EPM Components Installed from BI Installer 11.1.1.7.0 (BIFNDNEPM)
      Patch 18507802 - 4 of 8 Oracle BI Server (BIS)
      Patch 18507778 - 5 of 8 Oracle BI Presentation Services (BIPS)
      Patch 17300045 - 6 of 8 Oracle Real-Time Decisions (RTD)
      Patch 16997936 - 7 of 8 Oracle BI ADF Components (BIADFCOMPS)
      Patch 18507823 - 8 of 8 Oracle BI Platform Client Installers and MapViewer
    NOTE: Patch 16569379 - Dynamic Monitoring Service patch - is also required to be downloaded. This patch set is available to all customers who are using Oracle Business Intelligence Enterprise Edition 11.1.1.7.0, 11.1.1.7.1, 11.1.1.7.131017, 11.1.1.7.140114, 11.1.1.7.140225 and 11.1.1.7.140415. NOTE: It is also available for Exalytics customers who have applied the Exalytics PS3 patch. For more information refer to: OBIEE 11g 11.1.1.7.140527 Bundle Patch is Available for OBIEE (Doc ID 1676798.1). The OBIEE Suite Bundle Patches are cumulative: the content of the previous 11.1.1.7.x bundle patches is included in this latest bundle patch. Be sure to review the Readme documentation for further important patch information; it is available via the My Oracle Support | Patches & Updates screen when downloading. Keep up to date with the latest OBIEE patches and patch set updates by visiting OBIEE 11g: Required and Recommended Patches and Patch Sets (Doc ID 1488475.1).

    Read the article

  • Squid 3 and Internet Explorer 11 with authentication

    - by StBlade
    Need some help. My college has up until now been running Squid 3 successfully on Ubuntu 12.04 and now 14.04. That was until recently. Our WSUS server is dishing out updates to all our workstations, one of which is Internet Explorer 11. Now all of a sudden users do not need to authenticate via the Squid proxy to be able to use the internet. This makes it rather difficult, as I also use SARG to generate usage logs for all users each day. All our workstations also have Chrome on them, and Chrome authenticates fine via the Squid proxy. After a couple of Google searches, I ran into an article where someone mentioned that Microsoft has deprecated digest and basic authentication in IE 11. The reason given was that Office 2013 was causing problems because it did not show the authentication popup when Office tries to download templates from the internet. I have run into this problem, but setting those sites to not authenticate via Squid fixed it. Has anyone else run into something similar? Would changing to NTLM or Kerberos be a solution?

    Read the article

  • Oracle Enterprise Manager Extensibility News - June 2014

    - by Joe Diemer
    Introducing Extensibility Exchange Version 2
    On the heels of Enterprise Manager 12c Release 4 this week comes version 2.0 of the Extensibility Exchange. A new theme allows optimal viewing on a number of different computing devices, from large monitor displays to tablets to smartphones. One of the first things you'll notice is a scrollable banner with the latest news related to Enterprise Manager and extensibility. Along with the "slider" and the latest entries from Oracle and the Partner community, new features like a tag cloud and an auto-complete search box provide a better way to find the plug-in, connector or other Enterprise Manager entity you are looking for. Once you find it, a content details page with specific info related to that particular entity will enable you to access it at the provider's site and also rate and comment on that particular item. You can also send an email from the content details page, which is routed to the developer. And if you want to use version 1 of the Extensibility Exchange instead, you will be able to do so via the "Classic" option. Check it out today at http://www.oracle.com/goto/emextensibility.
    Recent Additions from Oracle's Partner Community
    A number of important 3rd party plug-ins have been contributed by Oracle's partner community, which can be accessed via the Extensibility Exchange or by clicking the links in this blog: Dell Open Manage, Fusion I-O ION Accelerator, NetApp SANtricity E-Series, and PostgreSQL by Blue Medora. You can also check out the following best practices and labs available via the Exchange: Riverbed Stingray Traffic Manager Reference Architecture, Datavail Alert Optimizer Custom Templates, Apps Associates' Oracle Enterprise Manager "Test Drives" for Oracle Database 12c Management, Oracle Enterprise Manager Monitoring Essentials, and Oracle Application Management Suite for Oracle E-Business Suite.

    Read the article

  • Message Driven Bean JMS integration

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4.1 and above, the product introduced the concept of real-time JMS integration within the Framework for interfacing. Customers familiar with older versions of the Framework will recall that we used a component called the Multi-purpose Listener (MPL), which was a very light service bus for calling interface channels (including JMS). The MPL is not supplied with all products, and customers prefer to use Oracle SOA Suite and native methods rather than the MPL. In Oracle Utilities Application Framework V4.1 (and for Oracle Utilities Application Framework V2.2 via Patches 9454971, 9256359, 9672027 and 9838219) we introduced real-time JMS integration natively for outbound JMS integration, and Message Driven Beans (MDB) for incoming integration. The outbound integration has not changed a lot between releases: you create an Outbound Message Type to indicate the record types to send out, create a JMS Sender (though now you use the Real Time Sender) and then create an External System definition to complete the configuration. When an outbound message of the configured type and external system appears in the table (via a business event such as an algorithm or plug-in script), the Oracle Utilities Application Framework will place the message on the Queue linked to the JMS Sender. The inbound integration has changed. In the past you created XAI Receivers and specified configuration about what types of transactions to process. This is now all configuration-file driven. The configuration files for the Business Application Server (ejb-jar.xml and weblogic-ejb-jar.xml) define Message Driven Beans and the queues to monitor. When a message appears on the queue, the MDB processes it through our web services interface. Configuration of the MDB can be native (via editing the configuration files) or through the new user exit capabilities (which are aimed at maintaining custom configuration across upgrades). The latter is better, as you build fragments of configuration that are easier to maintain. In the next few weeks a number of new whitepapers will be released to illustrate the features of the Oracle WebLogic JMS and Oracle SOA Suite integration capabilities.
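
    For readers unfamiliar with MDBs, a minimal generic JMS message-driven bean looks roughly like the sketch below. This is a plain Java EE illustration only, not the actual Oracle Utilities Application Framework class; the bean and logger names are hypothetical, and in the pattern described above the queue binding would come from ejb-jar.xml / weblogic-ejb-jar.xml rather than from code.

      import java.util.logging.Logger;
      import javax.ejb.ActivationConfigProperty;
      import javax.ejb.MessageDriven;
      import javax.jms.Message;
      import javax.jms.MessageListener;
      import javax.jms.TextMessage;

      // Hypothetical inbound bean: the container delivers each message from the
      // monitored queue to onMessage(); the queue itself is bound in the
      // deployment descriptors (ejb-jar.xml / weblogic-ejb-jar.xml).
      @MessageDriven(activationConfig = {
          @ActivationConfigProperty(propertyName = "destinationType",
                                    propertyValue = "javax.jms.Queue")
      })
      public class InboundServiceBean implements MessageListener {

          private static final Logger LOG = Logger.getLogger("InboundServiceBean");

          @Override
          public void onMessage(Message message) {
              try {
                  if (message instanceof TextMessage) {
                      // In the Framework this payload would be handed to the
                      // web services interface; here we only log it.
                      LOG.info("Received payload: " + ((TextMessage) message).getText());
                  }
              } catch (Exception e) {
                  // Rethrow so the container rolls back and the message is redelivered.
                  throw new RuntimeException("Failed to process inbound message", e);
              }
          }
      }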

    Read the article

  • Bug fix for Eclipse runtime plugin

    - by Peter Benedikovic
    This blog post is intended to inform you about a bug fix that solves this issue. Before continuing further, one important note: Linux and Mac users do not need to read further, because this bug appears only on Windows. The problem was that the runtime plugin registered a new runtime and server each time Eclipse started. Users ended up with a server view looking like this: I have created a new runtime plugin which is now available at the update site http://download.java.net/glassfish/eclipse/indigo (or the same ending with juno for Juno users). You will still need to uninstall the buggy plugin and (optionally but recommended) remove the runtimes created by it. Here is the guide for installing the bug fix:
    1. Uninstall the buggy runtime plugin via the menu Help->About Eclipse->Installation details.
    2. Remove the runtimes created by the old plugin via Window->Preferences->Server->Runtime Environment. After pressing the remove button you may be asked whether you also want to remove the servers based on the runtime being removed. It is recommended to do so.
    3. Now you can install the new runtime plugin. Go to Help->Install New Software.
    You may ask why I haven't provided an update for the buggy runtime that could be installed via the Check for Updates feature of Eclipse. There are two main reasons: The bug fix is needed only for Windows users, so I didn't want to bother other users by updating a working plugin. The runtime plugin had a structure that was not quite suitable for Eclipse update. This structure has now been changed, so future bugs (I am sure that there will be no such ;)) can be fixed by a standard update. Have a good one!

    Read the article

  • Is there a path from OpenSCAD to WebGL?

    - by Andrew Domaszek
    tl;dr version: does a software path exist to convert from OpenSCAD input to render on a website via WebGL? What I'm trying to do is display OpenSCAD-created designs uploaded via POST on a LEMP-driven website; this is currently done by outputting to a static PNG image. I would like to render one of OpenSCAD's export formats (wikibooks) and convert it for display in JavaScript/WebGL through some software path available via Linux batch processing. CPU and RAM are relatively unconstrained, but no GPU is available. I've looked into CopperLicht, but it requires CopperCube output, which would require Windows or Wine. FOSS is preferred but not required. edit: I found that three.js has converters for fbx, dae, obj, and 3ds formats, which might be helpful as one of the last steps.

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long term, case by case project, but the overall trend is clear.
    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking
    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system.
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 (ld option to warn when link accesses a library via default path, PSARC/2011/068 ld -z assert-deflib option) into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:
    -z assert-deflib=[libname]
        Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library:
        ld ... -z assert-deflib=libc.so ...
    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 (Prevent OSnet from accidentally linking to build system), which integrated into snv_162 (March 2011).
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.
    Conclusions
    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Why do my gvfs mounts not show up under ~/.gvfs?

    - by kynan
    From what I read, when mounting a network share via nautilus or gvfs-mount the mount point should be in ~/.gvfs. This seems not to be the case for me: I tried mounting both an FTP and an SMB share via both nautilus and gvfs-mount under both Ubuntu Maverick and Natty, and in none of the cases did I see any mount point under ~/.gvfs. I can access the shares just fine in nautilus, but I want to have access via the command line, which is why I need a mount point in the file system. Edit: Debugging following James Henstridge's answer and enzotib's comment revealed that on my laptop gvfs-fuse-daemon is running and consequently gvfs mounts show up in ~/.gvfs, whereas on the 2 workstations where ~/.gvfs remained empty gvfs-fuse-daemon was not running. On all 3 machines there are other gvfs processes running: gvfsd, gvfs-afc-volume-monitor, ... On the laptop, mount | fgrep gvfs yields gvfs-fuse-daemon on /home/xxx/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=xxx) That raises the questions: How are shares mounted without gvfs-fuse-daemon running? Is there no mount point created in that case, and is every access to the share a gvfs library call? Which daemon is responsible? gvfsd? What's the role of gvfs-fuse-daemon? Does it only create a fuse mount point in ~/.gvfs?

    Read the article

  • How to solve SocketException: Permission denied: connect

    - by luxinxian
    I recently encountered a problem that is giving me a headache, and I need help... The system consists of two subsystems, A and B (each running on a standalone Tomcat instance), currently running on the same machine. A invokes B's service via Spring HttpInvoker (HTTP). B also invokes other systems' services via HTTP. Symptoms:
    1. After starting, the system runs normally for 10-15 days.
    2. After running for that period of time, the system throws an exception: org.springframework.remoting.RemoteAccessException: Could not access HTTP invoker remote service at [http://xxx.xxx.xxx.xxx/remoting/call]; nested exception is java.net.SocketException: Permission denied: connect
    3. Once the exception occurs, it continues, not just occasionally (it looks like some resource is exhausted, but CPU rate < 5%, memory < 15%, network < 5%).
    4. When A's call to B fails, B's HTTP call to an external service also fails with the same exception.
    5. After closing both Tomcat services and restarting, everything works properly again.
    This cycle (steps 1-5) repeats, and I have not found the root cause. Environment: Windows 2008 R2, Tomcat 7.0.42 x86_64, Oracle JDK 1.7.0_40. Any ideas?
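
    For context, the A-to-B call path mentioned above is normally wired through Spring's HttpInvokerProxyFactoryBean. The sketch below shows that wiring programmatically; the service interface, URL and payload are hypothetical placeholders rather than details from the poster's system, and the comment about port exhaustion names only one possible cause, not a diagnosis.

      import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

      public class RemotingClientSketch {

          // Hypothetical remote interface exposed by subsystem B.
          public interface CallService {
              String call(String payload);
          }

          public static void main(String[] args) {
              // Programmatic equivalent of the usual <bean> definition in XML.
              HttpInvokerProxyFactoryBean factory = new HttpInvokerProxyFactoryBean();
              factory.setServiceUrl("http://localhost:8080/remoting/call"); // placeholder URL
              factory.setServiceInterface(CallService.class);
              factory.afterPropertiesSet();

              CallService service = (CallService) factory.getObject();
              // Every invocation opens a new HTTP connection to B. If client-side
              // sockets run out (for example, ephemeral port exhaustion on Windows),
              // the failure can surface as a SocketException like the one above.
              System.out.println(service.call("ping"));
          }
      }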

    Read the article

  • Can I force NFS automounts to use NFSv3?

    - by Steve
    I have a Linux server that is exporting NFSv4 as well as NFSv3. I have a Fedora 14 client that is defaulting to NFSv4 when automounting NFS shares off of the Linux server, and it seems to be causing some problems. All my other Linux clients on the network are mounting via NFSv3 without issue, so is there a way I can tell automount to mount the share via v3? I am pulling my automount maps via LDAP, with an entry in my /etc/auto.master file like so: +auto_master, so I assume it's a bit different than listing options with a regular automount map? (i.e. /home --nfsvers=3 fileserver:/DATA)

    Read the article

  • My wifi internet router connection resets when more devices are connected

    - by joeeoj
    The wifi internet router is connected directly to the Internet cable. The main PC is attached to it via a LAN cable, while 1 laptop and 3 mobile phones connect to it via wifi. Whenever 2 or more devices connect via wifi, the internet connection breaks after one minute and the connection resets. I tracked this behaviour for weeks and came to a conclusion: it seems like some 'device 1' got an IP address and then went into suspend mode. Then 'device 2' connected to the router and got the same IP. Then 'device 1' woke up from suspend mode and tried to use its old IP. The router sees that two identical IP addresses exist and automatically resets the internet connection. Is this possible? Have I tracked the problem correctly, and how do I solve it? The router is set to lease 100 IP addresses to devices that try to connect. The password is strong and no hacker's device is connected to my wifi network. I have tried changing the password and the AP's name.

    Read the article

  • SQL Authority News – Secret Tool Box of Successful Bloggers: 52 Tips to Build a High Traffic Top Ranking Blog

    - by Pinal Dave
    When I started this blog, it was meant as a bookmark for myself for helpful tips and tricks. Gradually, it grew into a blog that others were reading and commenting on. While SQL and databases are my first love and the reason I started this blog, the side effect was that I discovered I loved writing. I discovered a secret goal I didn't even know I wanted – I wanted to become an author. For a long time, writing this blog satisfied that urge. Gradually, though, I wanted to see my name in print.
    12th Book
    Over the past few years I have authored and co-authored a number of books – they are all based on my knowledge of SQL Server, and were meant to spread my years of experience into the world, to share what I have learned with my community. I currently have eleven of these “manuals” available for sale. As exciting as it was to see my name in print, I still felt that there was more I could do as an author. That is when I realized that I am more than just a SQL expert. I have been writing this blog now for more than 10 years, and it grew from a personal bookmark to a thriving website with over 2 million views per month. I thought to myself “I could write a book about how to create a successful blog!” And that is exactly what I did. I am extremely excited to share with all of you my new book – “Secret Toolbox of Successful Bloggers.”
    A Labor of Love
    This project has been a labor of love for me. It started out as a series for this blog – I would post one article a week until I felt the topic had been covered. I found that as I wrote, new topics kept popping up in my mind, and eventually this small blog series grew into a full book. The blog series was large enough to last a whole year, so I definitely thought that it could be a full book. Ideas on how to become a successful blogger were so frequent that, I will admit, I feel like there is so much I left out of this book. I had a lot more to say than I originally thought! I am so excited to be sharing this book with all of you. I am so passionate about this topic, and I feel like there are so many people who can benefit from this book. I know that when I started this blog, I did not know what I was doing, and I would have loved a “helping hand” to tell me what to do and what not to do. If this book can act that way for any of my readers, I feel it is a success.
    Rules of Thumb
    If you are interested in the topic of becoming a blogger, as you read this book, keep in mind that it offers suggestions only. Blogging is so new to the world that while there are “rules of thumb” about what to do and what not to do, a map of steps (“first, do x, then do y”) is not going to work for every single blogger. This book is meant to encourage new bloggers to put their content out there in the world, to be brave and create a community like the one I have here at SQL Authority. I have gained so much from this community, I wanted to give something back, and this book is just one small part. I hope that everyone who reads this book finds at least one helpful tip, and that everyone can experience the joy of blogging. That is the whole reason I wrote this book, and what I hope everyone takes away from it.
    Where Can You Get It?
    You can get the book from the following URLs: Kindle eBook | Print Book
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 21-27, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 21-27, 2012.
    - OTN Architect Day: Los Angeles. This is your brain on IT architecture. Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. [NOTE: The event was last week, of course. Big thanks to the session presenters and especially to those Angelinos who came out for the event.]
    - WebLogic Server 11gR1 Interactive Quick Reference. "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner."
    - Podcast: Are You Future Proof? The latest OTN ArchBeat Podcast series features Oracle ACE Directors Ron Batra, Basheer Khan, and Ronald van Luttikhuizen, three practicing architects in an open discussion about how changes in enterprise IT are raising the bar for success for software architects and developers.
    - Play Oracle Vanquisher. Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company’s position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as 'TEN.' My score? Never mind.
    - Advanced Oracle SOA Suite OOW 2012 Presentations. The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks.
    - OAM and OIM 11g Academies. Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes contain the articles we’ve written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."
    - Oracle’s Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman. Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.
    - Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman. Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki.
    - Following the Thread in OSB | Antony Reynolds. Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post.
    - OW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis. The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting.
    Thought for the Day: "Not everything that can be counted counts, and not everything that counts can be counted." — Albert Einstein (March 14, 1879 – April 18, 1955) Source: Quotes For Software Engineers

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).
    Introduction
    SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and to avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.
    Why split cores can matter
    Split core allocations can silently reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache, and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O bound or low-CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.
    What to do about it
    Split core situations are easily avoided. In most cases the logical domains constraint manager will avoid them without any administrative action, but they can be entirely prevented by doing one of the following:
    - Assign virtual CPUs in multiples of 8 (the number of threads per core). For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    - Use the whole core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    - Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it.
    If you have low utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests.
    Summary
    Split core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.

    Read the article

  • Why does WebDAV fail from inside home network

    - by Claus
    On my OS X server there is a folder that is configured in the Server app to be accessible via WebDAV. This folder is used to sync OmniFocus. On my router, I have set up a dynamic DNS. When I am outside my home network (physically away, or when connected via a VPN), I can connect and sync fine via: https://<server name from dyndns>/<username>/<path to WebDAV folder> However, when I am in my home network, the connection to WebDAV does not work (other connections, AFP for example, do work). What could be some reasons why I can't connect to WebDAV from within my home network? What log files could give hints, and where are they stored? I am running OS X Server 10.9.3 and Server.app. Thanks for your help.

    Read the article

  • Architecture- Tracking lead origin when data is submitted by a server

    - by Kevin
    I'm looking for some assistance in determining the least complex strategy for tracking leads on an affiliate's website. The idea is to make the affiliate's integration with my application as easy as possible. I've run into theoretical barriers, so I'm here to explore other options.
    Application Overview: This is a lead aggregation / distribution platform. We will be focusing on the affiliate portion of this website. Essentially, affiliates sign up, enter marketing campaigns and sell us their conversions.
    Problem to be solved: We want to track a lead's origin and other events on the affiliate site. We want to know what pages, ads, and forms they viewed before they converted. This can easily be solved with pixel tracking. Very straightforward.
    Theoretical Issues: I thought I would ask affiliates to place the pixel where I could log impressions and set a third-party cookie when the pixel is first called. Then I could associate future impressions with this cookie. The problem is that when the visitor converts on the affiliate's site and I receive their information via HTTP POST from the affiliate's server, I won't be able to access the cookie and associate it with the lead record unless the lead lands on my processor via a redirect and is then redirected back to the affiliate's landing page. I don't want to force the affiliates to submit their forms directly to my tracking site, so allowing them to make an HTTP POST from their server-side form processor would be ideal. I've considered writing JavaScript to set a first-party cookie, but this seems to make things more complicated for the affiliate. I also considered having the affiliate submit the lead's data via a conversion pixel. This seems to be the most ideal scenario so far, as almost all pixels are as easy as copy/paste. The only complication comes from the conversion pixel, which would submit all of the lead information; that request would come from the visitor's machine, so I could access my third-party cookie.
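
    As a rough illustration of the impression-pixel idea described above, a minimal tracking endpoint might look like the Java servlet sketch below. The servlet path, cookie name, query parameter and logging are all hypothetical choices made for the sketch, not part of the poster's system.

      import java.io.IOException;
      import java.util.UUID;
      import javax.servlet.http.Cookie;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      // Hypothetical impression endpoint: the affiliate embeds something like
      // <img src="https://tracker.example.com/pixel?page=..."> on each page.
      public class PixelServlet extends HttpServlet {
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws IOException {
              // Reuse the visitor cookie if present, otherwise issue a new one.
              String visitorId = null;
              if (req.getCookies() != null) {
                  for (Cookie c : req.getCookies()) {
                      if ("visitor_id".equals(c.getName())) {
                          visitorId = c.getValue();
                      }
                  }
              }
              if (visitorId == null) {
                  visitorId = UUID.randomUUID().toString();
                  Cookie cookie = new Cookie("visitor_id", visitorId);
                  cookie.setMaxAge(60 * 60 * 24 * 365); // one year
                  resp.addCookie(cookie);
              }

              // Record the impression (placeholder for a real datastore write).
              log("impression visitor=" + visitorId + " page=" + req.getParameter("page"));

              // A production endpoint would stream a 1x1 transparent GIF here;
              // for this sketch an empty 204 response keeps the example short.
              resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
          }
      }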

    Read the article

  • Monit can't detect MySQL, but I can

    - by Matchu
    Monit is configured to watch MySQL on localhost at port 3306.
      check process mysqld with pidfile /var/lib/mysql/li175-241.pid
        start program = "/etc/init.d/mysql start"
        stop program = "/etc/init.d/mysql stop"
        if failed port 3306 protocol mysql then restart
        if 5 restarts within 5 cycles then timeout
    My application, which is configured to connect to MySQL via localhost:3306, is running just fine and can access the database. I can even use MySQL Query Browser to connect to the database remotely via port 3306. The port is totally open and possible to connect to. Therefore, I'm pretty darn certain that it's running. However, running monit -v reveals that Monit cannot detect MySQL on that port:
      'mysqld' failed, cannot open a connection to INET[localhost:3306] via TCP
    This happens consistently, until Monit decides not to track MySQL anymore, as configured. How can I begin to troubleshoot this issue?

    Read the article

  • How can I run Ubuntu Desktop on my Galaxy Nexus?

    - by Jack Senechal
    On the Ubuntu for phones site it advertises the desktop view feature: "The phone becomes a full PC and thin client when docked." And there's the demo by Canonical of something similar running under Ubuntu for Android. I realize they're different systems, but the end effect in both is to have a full Ubuntu system running on the phone. I've installed Ubuntu Touch Preview on my Galaxy Nexus (toro), and it's working as expected (no cellular signal, but wifi works, etc). But when I plug in a monitor via HDMI it just mirrors the phone's touch display. There's also currently no bluetooth support for attaching a keyboard and mouse. The keyboard only kind of works via USB, and the mouse not at all. I've also tried running Ubuntu under Android via VNC, but the lack of responsiveness of VNC makes it impractical for daily use. I'd consider that route again if there is some way to make the UI more responsive. So the question is, how can I set up my phone to run Ubuntu Desktop in a way that's usable as a laptop replacement? Is there a way to enable Desktop View on Ubuntu Touch? Or can I run Ubuntu for Android as in the previously referenced demo? Plugging into a monitor would be OK, but I'd love to be able to use the desktop interface with mouse and keyboard through the phone's screen as well. Touch input and an onscreen keyboard would be a plus but are definitely not necessary.

    Read the article

  • OS X Snow Leopard, change file permissions on copy

    - by Francesco K
    I work with OS X Snow Leopard and need to allow users to make copies of files (templates) located in a read-only repository for subsequent editing. The repository is located on a separate physical drive mounted to the OS X boot volume. As this is a shared computer in a school environment, all users access the machine via a single login ("user_local"). Whether using POSIX permissions or ACLs, the use case requires the file permissions to change from "read" to "read write" as the files get copied to the "user_local" home directory. Googling around has not yielded anything that would indicate that this is possible via the Snow Leopard permission system. Question 1: Is this in fact possible via the permission system? If so, how? Question 2: If not, how would one go about solving this problem? I imagine this to be a fairly common use case, so there must be a workable solution for it out there. Thanks.

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files, and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files owned by www-data via SFTP, even though charlesr is part of the www-data group. I can modify the files no problem via an SSH session. So I'm not sure what to do. How do I give my SFTP session permissions to modify www-data owned files? For a bit of background, these are the notes I wrote for myself when setting up the server:

    Read the article

  • Implementing the transport layer for a SIP UAC

    - by Jonathan Henson
    I have a somewhat simple, but specific, question about implementing the transport layer for a SIP UAC. Do I expect the response to a request on the same socket that I sent the request on, or do I let the UDP or TCP listener pick up the response and then route it to the correct transaction from there? The RFC does not seem to say anything on the matter. It seems that, especially using UDP, which is connection-less, I should just let the listeners pick up the response, but that seems sort of counter-intuitive. In particular, I have seen plenty of UAC implementations which do not depend on having a listener in the transport layer. Also, most implementations I have looked at do not have the UAS receiving loop responding on the socket at all. This would tend to indicate that the client should not be expecting a reply on the socket that it sent the request on. For clarification, suppose my transport layer consists of the following elements:
    TCPClient (sends requests for a UAC via TCP)
    UDPClient (sends requests for a UAC via UDP)
    TCPServer (loop receiving requests and dispatching to the transaction layer via TCP)
    UDPServer (loop receiving requests and dispatching to the transaction layer via UDP)
    Obviously, the *Client sends my requests. The question is, what receives the response? The *Client waiting on a recv or recvfrom call on the socket it used to send the request, or the *Server? Conversely, the *Server receives my requests. What sends the response? The *Client? Doesn't this break the roles of each member a bit?
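
    To make the first option concrete, here is a minimal sketch in plain Java of a client that sends a request and then waits for the response on the same UDP socket it sent from. The destination address, port and message text are hypothetical placeholders, there is no real SIP message building or parsing, and whether the response actually comes back to this socket depends on the peer using symmetric response routing (RFC 3581 rport) or the Via sent-by matching this socket's port.

      import java.net.DatagramPacket;
      import java.net.DatagramSocket;
      import java.net.InetAddress;
      import java.nio.charset.StandardCharsets;

      public class SameSocketUdpClientSketch {
          public static void main(String[] args) throws Exception {
              // Placeholder request and destination; a real UAC would build a full
              // SIP message (Via with branch, CSeq, Call-ID, ...) here.
              byte[] request = "OPTIONS sip:bob@example.com SIP/2.0\r\n\r\n"
                      .getBytes(StandardCharsets.US_ASCII);
              InetAddress target = InetAddress.getByName("192.0.2.10"); // placeholder
              int sipPort = 5060;

              try (DatagramSocket socket = new DatagramSocket()) {
                  socket.send(new DatagramPacket(request, request.length, target, sipPort));

                  // Wait on the same socket for the reply instead of handing it to a
                  // separate listener; give up after roughly the 32 s transaction timeout.
                  socket.setSoTimeout(32000);
                  byte[] buffer = new byte[4096];
                  DatagramPacket response = new DatagramPacket(buffer, buffer.length);
                  socket.receive(response);
                  System.out.println(new String(response.getData(), 0,
                          response.getLength(), StandardCharsets.US_ASCII));
              }
          }
      }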

    Read the article

  • Basic clarification about Limited FTP/sFTP users

    - by mattewre
    I would like to get some clarification about the correct way to create limited users that have access to my VPS, which is used as a web server with Nginx. I usually do NOT install FTP and allow access via SFTP only. Is that OK for every setup? This is what I usually do to create a limited user called "admin" that should be able to access the folder with the website data via SFTP:
      mkdir -p /var/www/mysite.com/
      adduser admin
      adduser admin www-data
      chown -R root:root /var/www
      chmod -R 755 /var/www
      chmod -R 755 /var/www/mysite.com
      chown -R admin:www-data /var/www/mysite.com/
    This does not seem to be the correct way; I always have problems with permissions when I upload some files (for example with Wordpress in general). I would like to create a user that works exactly like the one that providers give to their clients when they buy a hosting service (that is, FTP access, though I would prefer SFTP). It is for my personal use, but I think that a limited user is a lot safer to use via SFTP than "root".

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown their hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away, and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines? The current solution looks like this:
    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold
    The new solution should:
    1) Import data from ODBC, CSV and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple algorithms (< $80 add $10, $200 add $25), as sketched below
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically
    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk, but it may take too long to develop and deploy. Any suggestions?
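
    As a small illustration of requirement 2 above, the kind of pricing rule that currently lives in the Excel file can be captured in a few lines of code. The thresholds mirror the examples in the question (reading "$200" as "$200 or more"); the mid-range markup and all names are assumptions made up for the sketch, not the company's actual rules.

      import java.math.BigDecimal;
      import java.math.RoundingMode;

      // Hypothetical pricing rules: cost under $80 gets +$10, cost of $200 or
      // more gets +$25, and anything in between gets an assumed 15% markup.
      public class PricingRulesSketch {

          static BigDecimal sellingPrice(BigDecimal cost) {
              if (cost.compareTo(new BigDecimal("80")) < 0) {
                  return cost.add(new BigDecimal("10"));
              } else if (cost.compareTo(new BigDecimal("200")) >= 0) {
                  return cost.add(new BigDecimal("25"));
              } else {
                  // Assumption only: the real mid-range rule would come from the business.
                  return cost.multiply(new BigDecimal("1.15"))
                             .setScale(2, RoundingMode.HALF_UP);
              }
          }

          public static void main(String[] args) {
              System.out.println(sellingPrice(new BigDecimal("49.99")));  // 59.99
              System.out.println(sellingPrice(new BigDecimal("120.00"))); // 138.00
              System.out.println(sellingPrice(new BigDecimal("250.00"))); // 275.00
          }
      }

    In practice these rules would be table-driven (per category or per feed) rather than hard-coded; the Excel file is essentially standing in for that rule table today.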

    Read the article

  • Sound in Ubuntu 12.10 on Chromebook

    - by Paul Mennega
    I've got an HP Chromebook 14 that I've rooted and installed Ubuntu on via Crouton (upgraded to 12.10), and most things are working nicely. One issue that I am having trouble solving relates to sound. I've got sound output that I can control via the individual app (e.g. YouTube), but when trying to control the volume via the built-in xfce4-mixer, I get the following error: GStreamer was unable to detect any sound devices. Some sound system specific GStreamer packages may be missing. It may also be a permissions problem. Opening a terminal, I can run alsamixer and control the volume levels. It reports that the card is CRAS. I'd love to be able to do it from XFCE and the built-in mixer. I've tried re-installing the GStreamer packages, but no luck. It seems like I am close, but I'm not sure where to go from here. On a maybe unrelated note, plugging headphones into the headphone jack in Ubuntu also does not stop the sound from coming from the speakers, and nothing comes through the headphones. Switching back to Chrome OS, everything works as expected, so I think it's a driver issue.

    Read the article
