Search Results

Search found 17610 results on 705 pages for 'specific'.

  • Integrating Amazon EC2 in Java via NetBeans IDE

    - by Geertjan
    Next, having looked at Amazon Associates services and Amazon S3, let's take a look at Amazon EC2, the elastic compute cloud that provides remote computing services. I started by launching an instance of Ubuntu Server 14.04 on Amazon EC2, which looks a bit like this in the online AWS Management Console, though I whitened out most of the details. Now that I have at least one running instance available on Amazon EC2, it makes sense to use the services that are integrated into NetBeans IDE. I created a new application with one class, named "AmazonEC2Demo". Then I dragged the "describeInstances" service that you see above, with the mouse, into the class. The IDE then automatically created all the other files, i.e., 4 Java classes and one properties file. In the properties file, register the access ID and secret keys; these are read by the other generated Java classes. Signing and authentication are done automatically by the generated code, i.e., there's nothing generic you need to do and you can immediately begin working on your domain-specific code. Finally, you're now able to rewrite the code in "AmazonEC2Demo" to connect to Amazon EC2 and obtain information about your running instance:

        import java.io.IOException;
        import java.util.logging.Level;
        import java.util.logging.Logger;

        // RestResponse and AmazonEC2Service are among the classes the IDE generated above
        public class AmazonEC2Demo {
            public static void main(String[] args) {
                String instanceId1 = "i-something";
                RestResponse result;
                try {
                    result = AmazonEC2Service.describeInstances(instanceId1);
                    System.out.println(result.getDataAsString());
                } catch (IOException ex) {
                    Logger.getLogger(AmazonEC2Demo.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    From the above, you'll receive a chunk of XML with data about the running instance: its name, status, dates, etc. In other words, you're now ready to integrate Amazon EC2 features directly into the applications you're writing, without much work to get started. Within about 5 minutes, you're working on your business logic, rather than on the generic code that anyone needs when integrating with Amazon EC2.
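    The generated properties file itself just pairs the two credentials; a minimal sketch, with hypothetical key names (check the file the IDE generates for its actual name and key names):

        # illustrative only; the generator chooses the real file and key names
        access_key_id=YOUR_ACCESS_KEY_ID
        secret_access_key=YOUR_SECRET_ACCESS_KEY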

  • NetDiag + TCP Blocking?

    - by CrazyNick
    We are facing an issue with the SharePoint 2007 timer jobs every day at a specific time, so we decided to track the TCP blocking information during those hours using the NetDiag tool. We are not able to find the required information when we use "netdiag /test:ipsec". What is the command that can be used to pull the TCP blocking information, and how do we configure it? If I run the command "netdiag /test:ipsec /debug" it returns "IP Security test . . . . . . . . . : Skipped". What does that mean?

  • Payables Master Generic Datafix (MGD) Now Checks For Even More EBTax Corruption!!

    - by MargaretW
    The Payables MGD is a vital diagnostic that all R12/12.1 customers need to run regularly to check the data integrity of their Payables system. This script does not make any changes to your system, so it's risk free, and it produces HTML-formatted output showing which data corruption issues have been detected, along with the Doc IDs that will be needed to fix them. This MGD diagnostic (version 120.92 and above) is even better than it used to be, as it now checks for 11 new EBTax corruption signatures that Support was seeing on a consistent basis. These lengthy Service Requests could have been avoided with one run of the MGD, which tells you right away if you have data corruption. It's the first thing our Payables support engineers will have you run when you log an SR, so why not be one step ahead? The new EBTax signatures that were included in this latest update to the MGD are pulled from the following common solutions documents:

    R12 E-Business Tax/Payables Data-Fixes: Cause and action to handle ZX_LINES_SUMMARY_U1 issue (Doc ID 1152123.1)
    EB-Tax Data Corruption Issues & Recommended Solutions (Doc ID 1316316.1)

    The specific issues that are now screened are detailed below:

    1. TAXABLE_BASIS_FORMULA and MANUALLY_ENTERED_FLAG mismatch
    2. ESTABLISHMENT_ID mismatch
    3. TRX_NUMBER mismatch
    4. TAX_RATE mismatch
    5. Currency Conversion related columns mismatch in Migrated Invoices
    6. HISTORICAL_FLAG and RECORD_TYPE_CODE mismatch
    7. ADJUSTED_DOC_TRX_LEVEL_TYPE is NULL or APPLIED_FROM_TRX_LEVEL_TYPE is NULL
    8. Missing Reversal Tax Distributions For Tax Distributions
    9. Tax Lines for discarded or cancelled Transaction Lines are not marked as cancelled
    10. Error AP_ERR_TAX_DIST_SYNC
    11. AP_UNFROZEN_DIST_EXIST/Unfrozen Tax Distributions exist for this invoice

    Get proactive: check your system for these common EBTax issues and fix the data before it causes a problem. Access the MGD note and watch the video that explains how it works here: R12: Master GDF Diagnostic to Validate Data Related to Invoices, Payments, Accounting, Suppliers and EBTax [VIDEO] (Doc ID 1360390.1)

  • Remote desktop pressing Windows key randomly

    - by wikiti
    When I'm using remote desktop at work from one specific PC, it has a quirk which is really quite irritating. I'll be typing away and suddenly the Windows key will be pressed: things start minimising, Explorer launches, the workstation gets locked (but it locks the remote workstation, so I have that wallpaper in the background, rather than the wallpaper from the local machine as I would expect). I've swapped keyboards, even though it didn't seem to be related to hardware, and it made no difference. I've done some internet digging and found someone saying it's related to using Windows+L to lock the workstation. I've tried to stop myself using that shortcut to see if it helps and have failed miserably :D It's happening to at least one other user remote desktopping from the same machine to a different PC. Reinstalling isn't really an option (I don't admin the PC or I would have tried it, grr), although I could probably get it done if it would definitely resolve the problem. Any ideas?

  • Directory "Bookmarking" in Linux

    - by Jason R. Mick
    Aside from aliasing and links, is there an easy way in Linux to tag commonly used directories and to navigate to a commonly used directory from the terminal? To be clear, here are the disadvantages I see with the alternative approaches, and why I want a bookmark/favorites-like system:

    - alias: Too specific (every new favorite requires a new alias... although you could in theory make an alias that echo-appends your dir as a new alias, which would be sort of clever). Can't nest favorites in folders (I can't think of a simple solution to this outside of heavy config scripting).
    - links: They clutter the directory and make ls a headache.
    - pushd/popd: Non-permanent (without shell config file scripting), can't nest favorites in directories, etc.

    Granted, I have multiple ideas for making my own non-standard solution, but before I have at it I wanted to get some perspective on what's out there and, if there is nothing, what a recommended approach would be. Does anyone know of such a favorites/bookmark-like terminal solution?
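    A minimal sketch of the "heavy config scripting" route the question alludes to, as bash functions for ~/.bashrc; the function names and the ~/.dirmarks file are made up, and nesting is deliberately not handled. These must be shell functions rather than scripts so that cd affects the current shell:

        # Bookmarks are stored one "name<TAB>path" pair per line in ~/.dirmarks.
        mark() {    # mark <name>: bookmark the current directory under <name>
            printf '%s\t%s\n' "$1" "$PWD" >> ~/.dirmarks
        }
        go() {      # go <name>: jump to the directory bookmarked as <name>
            cd "$(awk -F'\t' -v m="$1" '$1 == m {print $2; exit}' ~/.dirmarks)" || return
        }
        marks() {   # list all bookmarks in aligned columns
            column -t -s $'\t' ~/.dirmarks 2>/dev/null
        }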

  • Is there such a thing as an IP femtocell, and what does it do?

    - by The Journeyman geek
    My dad mentioned that a co-worker suggested using a device that might use CDMA to route calls through IP, to save costs on a certain overseas project we're on, since our home base is quite far from there. I've never heard of such a device, so if it exists, I'm wondering whether it's specific to particular ISPs, or whether you can just pick one off the shelf, plug it into an arbitrary internet connection, and make calls using it and a cellphone of some sort? As you can tell, details are sketchy, so if such a device doesn't exist, saying so might be a right answer ;)

  • What's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database, call it InternalDb, but it also queries two other databases, call them ExternalDb1 and ExternalDb2. Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is? Currently, I've just created a project for each of these external databases and then generated an Edmx and entities using the Entity Framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below:

    Project.Domain
    ExternalDb1Project.Domain
    ExternalDb2Project.Domain
    Project.Web

    So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example, if I want to do validation in my Project.Domain on the entities in the InternalDb, it's fine. But if I want to do validation for entities from either of the ExternalDbs, then I wonder where it should go? To be more specific, I retrieve Employees from ExternalDb1Project.Domain; however, I want to make sure they are Active. Where should this validation go? How would you architect a project like this at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create fakes when writing tests. I wonder where the interfaces for these various data contexts should reside?

  • Why do some machines respond with many RST packets instead of RST-ACK to refuse a connection?

    - by Michael J. Gray
    I have recently been trying to track down a problem with one of our systems and have noticed that it is simply not allowed to connect to a remote machine. However, the remote machine (not controlled by us) is responding to our request for a connection with many TCP RST packets on different ports (26469, 26497, 26498) from the one we originated on (53). It simply wouldn't let up at one point and flooded us with about 10 packets/second for an hour or two of nothing but RST on those obscure high ports. Out of the thousands of nodes we're connecting to, this is the only one ever to show this behavior. What could possibly cause this? EDIT: Below is a screenshot of Wireshark when it happened. I don't have the actual dump anymore and can't reproduce this specific scenario every time. Basically, we sent a SYN and immediately got RST on an odd port, so we responded with RST and just kept going back and forth.

  • Google+ Platform Office Hours for April 25, 2012: Q&A with the Hangouts API Team

    This week we were joined by Richard Dunn of the Hangouts API team, who answered questions about the Hangouts API. Discuss this video on Google+: goo.gl

    1:09 - What's going on with the Hangouts API?
    3:43 - Jason shares information about his current projects
    5:40 - Can I prevent a Hangout app from running within a Hangout On Air?
    8:05 - Can we have APIs to control On Air features?
    10:05 - Could a Silverlight / JavaScript bridge be created so we can use them in Hangout Apps?
    12:01 - Is there a way to obfuscate the code for a Hangouts app?
    15:24 - Are there plans to consolidate the various comment and chat channels for Hangouts On Air?
    18:53 - When will Hangouts On Air come to Android?
    20:48 - How can I access the OAuth token from the API? - developers.google.com
    22:39 - When will we have Hangout apps on mobile devices?
    24:57 - Is it possible to search for 2 or more hash tags via the search REST API?
    25:45 - Will we see a PHP REST API demo today?
    26:20 - How can I restrict usage of a Hangout app?
    30:07 - How do you hold a hangout that is simulcast on YouTube?
    31:07 - Why do users show up as empty objects before they've authorized the app?
    32:52 - What are the best practices for storing user-specific configuration?
    38:06 - Is anyone doing in-application payment?
    39:22 - Has anyone written any books about Hangout apps?

    From: GoogleDevelopers | Time: 42:04

  • nginx: deny directories and files from being downloaded

    - by YeppThat'sMe
    Gurus, I have a problem and I don't know how to solve it. I am working with Git and Compass/Sass on some projects. Now I want to protect those directories. When I go to the folder itself it's all fine: I get what I expected, a 403 Forbidden.

        location ~ /\.git {
            deny all;
        }

    But when I try to use the full path to the config file from Git, the browser starts to download it. Same scenario with Compass: there is a config.rb file within the folder, which also starts to download. How can I prevent this behaviour? How can I deny downloading specific files?
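    A hedged sketch of one way to cover both cases; since nginx uses the first regex location that matches a request, these blocks would need to come before any other regex locations that could match the same paths:

        # deny the whole .git tree wherever it appears in the URI
        location ~ /\.git(/|$) {
            deny all;
        }

        # deny Compass/Sass artifacts such as config.rb and the .sass-cache directory
        location ~ (^|/)(config\.rb|\.sass-cache)(/|$) {
            deny all;
        }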

  • Best blog package/platform (java, php etc)?

    - by user50912
    Hi folks, I want to set up a blog, but I want it to reside on a URL I've bought. I also don't want any of the ads and such that sit around blogs on blog-specific sites like Blogspot, and generally I want more control. I was thinking of getting shared hosting with MySQL and such to get it going (as opposed to a VM, which would be overkill). Then I just need to decide on the easiest, quickest (and most secure) way of getting something up there. After some googling, I see b2evolution.net, which sits on PHP, or Apache Roller, which seems to sit on Java. Could anyone offer any advice on what's my best approach here? Are there security concerns with either, or does anyone have any experience in this area? I really want setup time to be minimal, so I can concentrate on the feel of the blog rather than what's under the hood. Many thanks.

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, but I get annoyed), and I do not want to learn MS/T-SQL (it only allows one nullable row if the column is unique). I am thinking about learning a new database system. My requirements for it are:

    - Must allow multiple connections at once (read and write).
    - All data, or the data I choose, must be ACID compliant.
    - Performance should be good. I have a 17 GB table in one project. It should perform well on reads and transactional writes. With MySQL it took hours to restore it, and there were no foreign keys on that specific table. It only finished in a workday because I found a suggestion to adjust a setting, which I think was key-buffer. And it still took hours.
    - Unique columns that allow more than one row to be null. I shouldn't have to say it, but dammit, MS.
    - Allows one to make ongoing backups. Something like 'binary logs': relatively small amounts of data I can grab and apply to my local db to have it in sync with the one on the server.
    - Table joins. I'd rather not write a bunch of queries to simulate a join.

    What I would like, but is not required:

    - Foreign keys. This may be a requirement later.
    - Open sourced.
    - Fair tool support, so I can measure queries, easily backup/restore, etc.
    - .NET and C (or C++) interfaces. (I've seen one that uses raw TCP with JSON, which was OK-ish.)
    - Good subquery support. Once I was working with an older version of MySQL (I believe <5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries. Or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset.

    What DB system should I learn?

  • Redirecting 2 or more domains to same hosting server

    - by mtk
    I have domains A.com, A.co.in and A.in, purchased from site X. I have a hosting space/account purchased from site Y, which has provided me with 2 DNS entries that are to be set in the account at the site from which I purchased the domains. I have successfully changed the DNS entries of A.com to these 2 entries, and I am able to see my index.html page when I hit A.com. Problem: Along the same lines, I have changed the DNS entries to the same entries for A.co.in and A.in, but hitting those sites in a browser gives me no response, and the browser's 'Site not found' page is shown. Please let me know how to set this up so that when I hit any of the domains, the website is rendered from the hosting server. What am I doing wrong here? Note: It has been more than 3 days since changing the DNS entries, so I don't think this is a problem of DNS propagation, which I heard about from some people. Please provide some detailed explanation, as I am very, very new to this. This is my first hosting ;) -Thanks

  • How can I restrict the backuppc client user as much as possible? (rsync)

    - by jxn
    I have BackupPC making full backups of servers, but I'd like to be sure that my setup is as paranoid as possible. BackupPC is set up to back up via rsync, and it is set up to use a specific user on each client to be backed up. Because the BackupPC client user has to have access to every file on the client machine and the ability to ssh into the machine without an interactive password, I'm a little nervous about securing the clients, and I'd like to know I haven't overlooked any options. Here's what I have in place: in the client user's authorized_keys file, I've included

        from="IPTOSERVER",command="/usr/bin/rsync"

    before the user's public key, so that the user can only log in coming from the BackupPC server. Next, in the sudoers file, I've added this line:

        backuppc ALL=NOPASSWD: /usr/bin/rsync

    to allow root-level permissions only for the rsync command for that user. Are there other user, policy, or ssh restrictions that I can add while still allowing the BackupPC client user to rsync all files?
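    Pulling the pieces described above into one place, with a few additional standard authorized_keys restriction options layered on (the IP address and key are placeholders, and whether every option is compatible with how BackupPC drives rsync would need testing):

        # client: ~backuppc/.ssh/authorized_keys (a single line)
        from="10.0.0.5",command="/usr/bin/rsync",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA...key... backuppc@backupserver

        # client: /etc/sudoers
        backuppc ALL=NOPASSWD: /usr/bin/rsync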

  • Custom LightDM sessions to launch an application

    - by zachtib
    I'm trying to set up Ubuntu to act as a kiosk running a custom application, and am trying to get a LightDM session built to automatically start it. Ideally, I'd like to have two sessions available from LightDM. The default would start my application fullscreened, and the other would open a minimal desktop in case any configuration (mostly connecting to a wireless network) needed to be done. I've done a lot of research over the last week on custom LightDM and GNOME sessions. I've got a custom greeter written for LightDM that can start either session, but I can't figure out how to add a specific application to the GNOME session that is ultimately started without just putting a launcher in the global startup directory, and I don't want to do that, since I don't want the application starting when they open "configuration mode". Another problem I've run into with my current workaround is that the application doesn't fullscreen properly, which makes me think I'm not starting enough of a GNOME session (currently it's just Metacity, with no panel or anything else). Edit: I found a solution. See http://www.webupd8.org/2011/11/make-applications-autostart-only-in.html
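    For reference, a LightDM session ultimately boils down to a .desktop file under /usr/share/xsessions that points at a startup command; a minimal sketch of the kiosk session described, where the file names and paths are hypothetical:

        # /usr/share/xsessions/kiosk.desktop
        [Desktop Entry]
        Name=Kiosk
        Comment=Starts the kiosk application full screen
        Exec=/opt/kiosk/kiosk-session.sh
        Type=Application

        #!/bin/sh
        # /opt/kiosk/kiosk-session.sh: start a window manager first so the
        # app can fullscreen properly, then hand the session to the app
        metacity &
        exec /opt/kiosk/kiosk-app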

  • Granting access to authzTo attribute

    - by bemace
    I'm trying to grant certain accounts auth access to their authzTo attribute in order to allow proxied authorization. I tried adding this LDIF:

        dn: olcDatabase={-1}frontend,cn=config
        changetype: modify
        add: olcAccess
        olcAccess: {1}to authzTo by dn.children="ou=Special Accounts,dc=example,dc=com" auth
        -

    using the command

        ldapadd -f perm.ldif -D "cn=admin,cn=config" -W

    but got this error:

        modifying entry "olcDatabase={-1}frontend,cn=config"
        ldap_modify: Other (e.g., implementation specific) error (80)
        additional info: <olcAccess> handler exited with 1

    Using verbose output and turning up the debug level haven't given me any more clues. Can anyone see what I'm doing wrong?

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat", and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would fix this particular example, a more complex issue may take weeks to fix, yet identifying the bug with a unit test is a much less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some flaws just need explicit implementation documentation (e.g., case sensitive or not); others sit in code that does not trigger the bug based on how it is currently called. But unit tests can be created that execute specific scenarios with valid inputs and cause the bug to be seen. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be created to exercise the code once someone fixes it. On one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail vs. failures caused by a code check-in would be difficult.
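    One concrete form of the flagging idea, assuming JUnit 4; the class, method, enum, and ticket names below are illustrative, not from the original. The test is checked in as executable documentation of the desired behavior, while @Ignore keeps it out of the pass/fail count and its reason string points at the bug until someone fixes the code:

        import static org.junit.Assert.assertEquals;
        import org.junit.Ignore;
        import org.junit.Test;

        public class AnimalParsingTest {

            // Documents the desired case-insensitive behavior without
            // breaking the build; remove @Ignore once the fix lands.
            @Ignore("BUG-123: parsing should be case-insensitive")
            @Test
            public void lowerCaseInputReturnsCat() {
                assertEquals(Animal.Cat, Animal.parse("cat"));
            }
        }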

  • Advanced Terminal / Console apps for Mac OS X?

    - by Jakob Egger
    I use a lot of command line programs, very often with similar arguments. Can anyone recommend an application or a workflow that allows me to store often-used shell commands and search through my recent commands, using a GUI? I have commands that I use very often (e.g. rsync a specific directory to a server) and other commands that I use less often. Creating shell scripts for every code snippet I might reuse seems a bit awkward. Especially for programs that I use seldom, I end up reading the docs over and over again, because I forgot to write down the exact shell command. Ideally I would like an app that's just like Terminal.app, but provides some kind of history and snippet management. What do you use to keep track of shell commands?

  • Please explain some of the features of URL Rewrite module for a newbie [closed]

    - by kunjaan
    I am learning to use the IIS URL Rewrite module, and some of the "features" listed on its page are confusing me. It would be great if somebody could explain them to me and give a first-hand account of when you would use each feature. Thanks a lot!

    - Rewriting within the content of specific HTML tags
    - Access to server variables and HTTP headers
    - Rewriting of server variables and HTTP request headers (What are the "server variables", and when would you define or redefine them?)
    - Rewriting of HTTP response headers
    - HtmlEncode function (Why would you use HtmlEncode on the server?)
    - Reverse proxy rule template
    - Support for IIS kernel-mode and user-mode output caching
    - Failed Request Tracing support

  • IP-restricted port forwarding with iptables

    - by Tom
    For example, I have two authorized client computers, 1.1.1.1 and 2.1.1.1. My server running iptables is 3.1.1.1 and my firewalled web server is 4.1.1.1. When one of the authorized client IPs connects to 3.1.1.1 on port 80, I would like the connection to be forwarded to 4.1.1.1 on port 8888. If any other IP attempts to connect, I would like it to refuse/drop the connection. What iptables config would accomplish this? Is there something more specific out there that would be better suited for this job?
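    A rough sketch of rules matching the description, assuming the iptables box routes traffic for the web server and IP forwarding is enabled (net.ipv4.ip_forward=1); treat it as a starting point rather than a definitive config:

        # forward port 80 to 4.1.1.1:8888, but only for the two authorized clients
        iptables -t nat -A PREROUTING -p tcp -s 1.1.1.1 --dport 80 -j DNAT --to-destination 4.1.1.1:8888
        iptables -t nat -A PREROUTING -p tcp -s 2.1.1.1 --dport 80 -j DNAT --to-destination 4.1.1.1:8888

        # let the translated traffic pass through the FORWARD chain
        iptables -A FORWARD -p tcp -d 4.1.1.1 --dport 8888 -j ACCEPT

        # anyone else hitting port 80 on this box is refused
        iptables -A INPUT -p tcp --dport 80 -j REJECT --reject-with tcp-reset

        # rewrite the source so replies come back through this box
        iptables -t nat -A POSTROUTING -p tcp -d 4.1.1.1 --dport 8888 -j MASQUERADE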

  • Looking for an example of how a software project can be managed/deployed

    - by rguilbault
    My company is evaluating off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production]. At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline. I have been searching the internet for a few days trying to find how this process is accomplished elsewhere -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources detailing specific workflows, or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for -- not sure if I want to shell out the cash for it, though.

  • Procurement and E-Business Suite Product Analyzers .. Can you use this tool to resolve your SR?

    - by LindaJ-Oracle
    Procurement and E-Business Suite Product Analyzers (Doc ID 1545562.1). Analyzers are query/read-only tools with easy-to-read HTML output. The tools are delivered by EBS Support via My Oracle Support document IDs for ease of use. The Analyzer scripts are meant to be part of your production maintenance program, run by your sysadmin or by designated end users. The result set is an easy-to-read HTML output that provides recommendations, solutions, and early warnings of items that should be reviewed and corrected. Each analyzer can be run on demand or scheduled for repeatability and emailed to critical reviewers. There are several Analyzers available for E-Business Suite Applications Technology Group, Financials, and Manufacturing, covering some of the following topics; review them all at (Doc ID 1545562.1).

    - Workflow
    - Concurrent Processing
    - Clone Log Parser Utility (Rapid Clone)
    - Invoices, Payments, Accounting, Suppliers and EBTax
    - Validate Data before Period Close
    - EBTax Setup
    - Payables Trial Balance
    - Internet Expenses
    - AutoInvoice Post-Process
    - ASCP Performance
    - PO Approval
    - iProcurement Items

    For the Procurement-specific Analyzers, access them directly at:

    R12 IP Item Analyzer Diagnostic Script (Doc ID 1586248.1)
    R12: PO Approval Analyzer Diagnostic Script (Doc ID 1525670.1)

  • Preserve the state of the start screen

    - by axrwkr
    What would I need to do to make the start screen stay the way it was the last time I saw it when I go back to it? I've noticed that every time I leave the start screen it resets back to the beginning, which means that I need to scroll back to where I was to get the same view. I don't want to change the order of the applications on the start screen to accommodate this; I would much prefer to find a way to make the start screen stay the way it was, so I can move on from there. Also, if there were a way to jump to the beginning or the end of the list, that would be great. I imagine I can just use the search feature to find a specific program, but that's just an extra step, almost as bad as having to scroll.

  • Mac failing (failed?) hard drive - is all hope lost?

    - by Daniel
    It's a 500 GB Seagate laptop hard drive that came with my MacBook Pro, with an Apple partition format. It has already been replaced, and I now have it connected externally via a SATA/USB adapter. I'm trying to get just a few files that I worked on while out of town when it crashed (and thus did not have my Time Machine backup drive). The drive will not mount, but OS X Disk Utility detects it and can read the capacity, model number, and even the name of the partition, which leads me to believe all hope may not be lost. Failed attempts so far:

    - Disk Utility verify+repair says the drive cannot be repaired and that I should back up immediately (lovely)
    - DiskWarrior says it cannot rebuild the directory due to hardware failure
    - Data Rescue quick & deep scans immediately failed
    - PhotoRec says "error reading sector" for every sector (at least for the few minutes I let it run before closing it to explore other options)

    What else can I try here? Again, I'm just looking for a few small files (Python scripts, to be specific), not a full recovery.

  • rsync server side limit bandwidth/connection

    - by c2h2
    In a VoIP application, I have up to 3000 clients that rsync audio files from their Linux server daily. The server is placed at a data center (10 Mbps inbound/outbound) and also works as a VoIP SIP server running FreeSWITCH (so low ping latency should be ensured). Therefore I would like to have server-side control of rsync, which controls:

    - Limit total outbound bandwidth.
    - Limit total number of connections. (Reject clients while at the max number of connections and let them retry after a specific time frame.)
    - OPTIONAL: list/kill individual connections.

    Normally I would use ssh + rsync + PEM keys with some extra options, but the above requirements are not feasible with simple command lines. Can anyone point me in some direction, or show some scripts/tools? I would also probably integrate them and release them on GitHub. Thanks!
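    As a partial sketch, running rsync as a daemon covers the connection cap out of the box via rsyncd.conf (the module name and paths below are placeholders); clients beyond the limit get an error and must retry. Bandwidth limiting is not handled here and is typically done separately, e.g. with the client-side --bwlimit option or OS-level traffic shaping:

        # /etc/rsyncd.conf -- illustrative module
        [audio]
            path = /srv/voip/audio
            read only = yes
            # clients beyond this count are refused and can retry later
            max connections = 100
            lock file = /var/run/rsyncd-audio.lock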
