Search Results

Search found 4853 results on 195 pages for 'continuous integration'.


  • Enterprise with eyes on NoSQL

    - by thegreeneman
    Since joining Oracle a few months back, I have had the fortune of interacting with a number of large enterprise organizations and discussing their current state of adoption of NoSQL database technology. It is worth noting that a large percentage of these organizations do have some NoSQL use and have been steadily deepening their understanding of its applicability to certain data management workloads. Through those discussions I've learned that one of the biggest issues confronting enterprise adoption of NoSQL databases is the lack of standards for access, administration and monitoring. This was not so much of an issue for the early adopters of NoSQL technology, because they employed a highly DevOps-centric approach to application deployment, leaving a select few highly qualified developers with the task of managing in production the system that they had designed and implemented. However, as NoSQL technology moves out of the startup and into the hands of larger corporate entities, developers with a skill set broad enough to cover both development and IT-style production management are in short supply; they quickly get moved on to new projects, often into different roles within the company. This difference between the way smaller, more agile startups operate and the way more established companies do is revealing a gap in the NoSQL technology segment that needs to be addressed. This is one of the places where a company such as Oracle has a leg up on the NoSQL database front. Having gone through a database maturation process of its own, and having built a vast set of corporate relationships hand in hand with solving exactly these kinds of issues, Oracle is in a great place to lead the way in closing the requirements gap for NoSQL technology. Oracle's understanding of the needs specific to mature organizations has already made its way into Oracle's NoSQL Database offering, with features such as: one-click cluster deployment with visual topology planning, standards-based monitoring protocols such as SNMP, support for data access for reporting via standard SQL, and integration with emerging standards for data access such as MapReduce. Given the exciting developments we're driving in the Oracle NoSQL Database group, I will have a lot more to say about this topic as we move into the second half of the year.

    Read the article

  • Showing ZFS some LOVE

    - by Kristin Rose
    L is for the way you look at us, and O because we're Oracle, but V is very, very extraordinary, and E, well that's obvious... E is because Oracle's new Sun ZFS Storage Appliance is Excellent, and here at OPN, we like to spell out the obvious! If you haven't already heard, the Sun ZFS Appliance has "A simple, GUI-driven setup and configuration, solid price-performance and world-class Oracle support behind it. The CRN Test Center recommends the Sun ZFS Storage". Read more about what CRN said here. Oracle's Sun ZFS Appliance family delivers enterprise-class network attached storage (NAS) capabilities with leading Oracle integration, simplicity, efficiency, performance, and TCO. The systems offer an easy way to manage and expand your storage environment at a lower cost, with more efficiency, better data integrity, and higher performance when compared with competitive NAS offerings. Did we mention that setup, including configuration, will take you less than an hour, since it all comes in one box and is so darn simple to use? So if you L-O-V-E what you're hearing about Oracle's Sun Z-F-S, learn more by watching the video below and visiting any of our available resources. It Had to Be You, The OPN Communications Team

    Read the article

  • Social Networks & the Cloud

    - by kellsey.ruppel
    It's no secret that millions of people are connected to the Internet. And it also probably doesn't come as a surprise that a lot of those people are connected on social networking sites. Social networks have become an excellent platform for sharing and communication that reflects real-world relationships, and they play a major part in the everyday lives of many people. Facebook, Twitter, Pinterest, LinkedIn, Google+ and hundreds of others have transformed the way we interact and communicate with one another. Social networks are becoming more than just an online gathering of friends; they are becoming a destination for ideation, e-commerce, and marketing. But it doesn't just stop there. Some organizations are utilizing social networks internally, integrated with their business applications and processes, and the possibility of social media and cloud integration is compelling. Forrester alone estimates enterprise cloud computing to grow to over $240 billion by 2020. It's hard to find any current IT project today that is NOT considering cloud-based deployments. Security and quality-of-service concerns are no longer at the forefront; rather, it's about focusing on the right mix of capabilities for the business. Cloud vs. on-premise? Policies and governance models? Social in the cloud? The cloud's increasing sophistication, security in applications, mobility, transaction processing and social capabilities make it an attractive way to manage information. And Oracle offers all of this through the Oracle Cloud and Oracle Social Network. Oracle Social Network is a secure private network that provides a broad range of social tools designed to capture and preserve information flowing between people, enterprise applications, and business processes. By connecting you with your most critical applications, Oracle Social Network provides contextual, real-time communication within and across enterprises. With Oracle Social Network, you and your teams have the tools you need to collaborate quickly and efficiently, while leveraging the organization's collective expertise to make informed decisions and drive business forward. Oracle Social Network is available as part of a portfolio of application and platform services within the Oracle Cloud. Oracle Cloud offers self-service business applications delivered on an integrated development and deployment platform, with tools to rapidly extend and create new services. Oracle Social Network is pre-integrated with the Fusion CRM Cloud Service and the Fusion HCM Cloud Service within the Oracle Cloud. Learn more about how you can use Oracle Social Network to revolutionize how you create, understand, and achieve true value through enterprise social networking. And be sure to check out the following sessions here at Oracle OpenWorld, where you can learn more about Oracle Cloud and Oracle Social Network:

    - Tuesday, Oct 2: Oracle WebCenter's Cloud Strategy: From Social and Platform Services to Mashups, 1:15pm - 2:15pm, Moscone West - 3001
    - Wednesday, Oct 3: Oracle Social Network: Your Strategy for Socially Enabled Oracle Fusion Applications, 11:45am - 12:45pm, Moscone West - 3002/3004

    Read the article

  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning, there will be only one method that clients call, which performs a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes", to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time; simplify the log to include only the last applicable change for each object; attach the serialized values for the affected objects; and return this all to the client. My question then is: how do I unit-test this entry-point method? Obviously, each class will have thorough unit tests to ensure that its functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not pass in each of these internal classes IoC-style, because they're small and are only made classes to satisfy the single-responsibility principle. I see a couple of possibilities:

    - Create a "private" interface header for the tests, with methods that call the internal classes, and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out, and just test that the methods are called with the right arguments.
    - Write a series of fatter tests for the entry point without mocking anything out, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.
    - Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making construction of the service annoying, and it leaks internal details to the client.
    - Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer call through to it. This ensures that client code still sees the easy/pretty initializer and no internals are leaked. (A sketch of this option follows below.)

    I'm sure there are more ways to solve this problem that I haven't thought of yet, but my question is: what's the most appropriate approach according to unit-testing best practices? Especially considering that I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.
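    For concreteness, here is a minimal sketch of that last option. It's in Python rather than Objective-C purely for brevity, and every class and method name is invented for illustration; the idea is just that the public initializer wires up the real collaborators, while the same parameters double as the "private" initializer that lets tests hand in fakes:

        import unittest
        from unittest import mock

        class LogStore:
            """Reads the service-internal change log (real implementation elided)."""
            def fetch(self, since):
                raise NotImplementedError

        class LogSimplifier:
            """Keeps only the last applicable change per object."""
            def last_change_per_object(self, log):
                raise NotImplementedError

        class Serializer:
            """Attaches serialized values for the affected objects."""
            def serialize(self, change):
                raise NotImplementedError

        class ChangeLogService:
            # The default arguments act as the "private" initializer: clients
            # just call ChangeLogService() and never see the internals, while
            # tests inject fakes through the same parameters.
            def __init__(self, store=None, simplifier=None, serializer=None):
                self._store = store or LogStore()
                self._simplifier = simplifier or LogSimplifier()
                self._serializer = serializer or Serializer()

            def changes_since(self, timestamp):
                log = self._store.fetch(timestamp)
                latest = self._simplifier.last_change_per_object(log)
                return [self._serializer.serialize(c) for c in latest]

        class EntryPointTest(unittest.TestCase):
            def test_simplifies_then_serializes(self):
                store = mock.Mock()
                store.fetch.return_value = ["a1", "a2", "b1"]
                simplifier = mock.Mock()
                simplifier.last_change_per_object.return_value = ["a2", "b1"]
                serializer = mock.Mock()
                serializer.serialize.side_effect = str.upper
                svc = ChangeLogService(store, simplifier, serializer)
                self.assertEqual(svc.changes_since(0), ["A2", "B1"])
                store.fetch.assert_called_once_with(0)

        if __name__ == "__main__":
            unittest.main()

    The test exercises only the public entry point, yet never touches the real data store, which is the property I'm after.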

    Read the article

  • Play Framework Plugin for NetBeans IDE

    - by Geertjan
    The start of minimal support for the Play Framework in NetBeans IDE 7.3 Beta would consist of (1) recognizing Play projects, (2) an action to run a Play project, and (3) classpath support. Well, most of that I've created already. For example, below you can see logical views in the Projects window for Play projects (i.e., I can open all the samples that come with the Play distribution). Right-clicking a Play project lets you run it and, if the embedded browser is selected in the Options window, you can see the result in the IDE. Make a change to your code and refresh the browser, which immediately shows you your changes. What needs to be done, among other things:

    - A wizard for creating new Play projects, i.e., it would use the Play command line to create the application and then open it in the IDE.
    - Integration of everything available on the Play command line.
    - Maybe the logical view, i.e., what is shown in the Projects window, should be changed. Right now, only the folders "app" and "test" are shown there, with everything else accessible in the Files window, as can be seen in the screenshot above.
    - More work on the classpath, i.e., I've hardcoded a few things just to get things to work correctly.
    - An Options window extension to register the Play executable, instead of the current hardcoded solution.
    - Scala integrations, i.e., investigate if/how the NetBeans Scala plugin is helpful and, if not, create different/additional solutions. E.g., the HTML templates are partly in Scala, so we need to embed Scala support into HTML.
    - Hyperlinking in the "routes" file, as well as special support for the "application.conf" file.

    Anyone interested, especially if you're a Play fan (a "playboy"?), in joining me in working on this NetBeans plugin? I'll be uploading the sources to a java.net repository soon. It will be here, once it has been made publicly accessible: http://java.net/projects/nbplay/sources/nbplay A kind of cool detail is that the NetBeans plugin is based on Maven, which means that you could use any Maven-supporting IDE to work on this plugin.

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the software testers I've worked with so far in my career. In fact, I've been thinking a lot about the software tester's role in general, and I have reached a potentially contentious conclusion: non-developer software testing staff are less efficient at software testing than developers. Now, before everyone gets upset, hear me out. This isn't mere opinion. Software testing and software development both require a lot of skills in common:

    - Problem solving
    - Thinking about corner cases
    - Analytical skills
    - The ability to define clear and concise step-by-step scenarios

    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test-maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc. So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform software testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work? Hire a bunch of developers and split them into two roles:

    - Software developers
    - Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)
    - Software developers doing application support. (I've removed this as it is probably a separate question altogether.)

    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation. So maybe you could have two groups and make sure the rotation cycles of the groups are staggered. For example, if each rotation were two sprints long, the two groups would have their rotations one sprint apart. That way there's only a 50% turnover rate per sprint. Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to be in the roles. Let's assume I'm starting a new company and I can hire these ideal people.) Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production]. At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc. I have been searching the internet for a few days trying to find out how this process is accomplished anywhere else. I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, or how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing). Can anyone point me to any resources, or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due-diligence testing, and we can suppose that any tools are available for use (if that helps to illustrate the process). To start things off, I think I understand this much:

    - we document the requirements, e.g.: support sine, cosine and tangent functions
    - we create some type of change request/work order to assign to programming
    - coding takes place, commits are made to version control
    - peer review commences
    - the programmer marks the work order as completed?

    ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History: We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details; the gateway then returns him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, entering their credit card details with an additional login on another site, etc.).

    Intention and problem description: We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cope with all (or rather most) of the things that may happen during processing, bearing in mind that normally simplicity is robust whereas complexity is fragile. Examples:

    User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for a response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option, but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the user's connection?

    Database goes away: This sounds over-complicated, but with, e.g., a webserver in the States and a DB in the UK, it has happened and will happen again: the user clicks together his order, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, in PHP, to start something like an SQL "transaction" that only at the very end gets committed or rolled back, depending on the individual steps? Should neither commit nor rollback have happened, I could then "lock" the user to prevent him from paying again or from payments being improperly accounted for; but how? And what else do I need to consider technically? None of the integration examples of, e.g., WorldPay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this! (A rough sketch of one way I'm considering structuring it follows below.)
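    Update: here is the pattern I keep circling back to, hedged accordingly. It's Python with an in-memory SQLite table rather than our PHP stack, and the table, column and function names are all invented. The point is only the ordering: durably record the intent (a 'pending' row keyed by a reference we also send to the gateway) before the XML exchange, so that a user abort, a crash, or a lost DB connection leaves a row that a reconciliation job can later settle against the provider:

        import sqlite3
        import uuid

        def take_payment(conn, order_id, send_to_gateway):
            # 1. Durably record intent BEFORE contacting the gateway. The ref
            #    doubles as an idempotency key the provider can be queried by.
            ref = str(uuid.uuid4())
            conn.execute(
                "INSERT INTO payments(ref, order_id, state) VALUES (?, ?, 'pending')",
                (ref, order_id))
            conn.commit()

            # 2. Do the XML request/response exchange. If anything goes wrong
            #    here (user abort, timeout, DB away), the 'pending' row remains
            #    as evidence for a reconciliation job to resolve later.
            response = send_to_gateway(ref)

            # 3. Record the outcome; until this commit succeeds, the order
            #    stays 'pending' and a second payment attempt is refused.
            state = "captured" if response == "OK" else "declined"
            conn.execute("UPDATE payments SET state = ? WHERE ref = ?", (state, ref))
            conn.commit()
            return state

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE payments(ref TEXT PRIMARY KEY, order_id INTEGER, state TEXT)")
        print(take_payment(conn, 42, lambda ref: "OK"))  # stand-in gateway call

    The "lock" I asked about then falls out naturally: refuse a new payment attempt while a 'pending' row exists for the order, and have a background job query the provider by ref to settle stuck rows.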

    Read the article

  • Fusion Middleware Newsletter - October Edition is Now Out

    - by Tanu Sood
    From the latest Oracle Fusion Middleware product releases, to the Oracle AppAdvantage use case on cloud and on-premise integration, to the very latest in the Developer corner and more, the October edition of the Oracle Fusion Middleware newsletter is chock-full of information. Catch the latest edition to learn about the highlights in the latest releases of Oracle GoldenGate 12c, Oracle Data Integrator 12c, and Oracle WebCenter. While there, don't miss the latest news and upcoming events for Oracle Fusion Middleware and developers. Find out who we have in the Team Spotlight this edition, and watch the latest customer success stories across the portfolio. Did we miss anything? Would you like to hear more about a particular topic? Let us know. Simply drop us a comment and we'll be sure to discuss it in our next editorial meeting. In the meantime, grab a coffee and enjoy the October edition of the newsletter.

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago. Explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2-second mobile app startup. An app with a poor mobile experience results in users not buying stuff, going to a competitor, and not liking your company. Battery life matters. A bad mobile app is worse than no app at all, because it turns people away from the brand. Apps didn't exist 10 years ago; they were a 72-billion-dollar-a-year business in 2013, projected at 151 billion in 2017.

    Testing performance. Mobile is different from a regular app; you need to fix issues before customers discover them. ARO is a free and open-source AT&T tool for identifying mobile app performance problems. Mobile data is different: the radio resource control (RRC) state machine. The radio goes from idle to continuous reception, which drains battery while data is sent and packets come through; after the packets come through, the radio is still on ("tail time"), and after 10 seconds of no data the radio goes off. For example, YouTube: 10 to 15 seconds of tail time after every connection can be a huge drain on battery; app traffic drives the RRC state. The goal: balance fast network connectivity against battery usage. ARO is free and open source, tests any platform, and has won awards.

    How do I test my app? pcap or tcpdump the network. Native collector: Android and iOS (a rooted Android device is needed). Test the app on the phone, including background data and idle time, for ads and analytics. Traffic is graded against 25 best practices. You can see all the processes, all network traffic mapped to processes, and stats about the trace; you can look just at your app and exclude Facebook, etc. Many kinds of tests can be conducted, e.g., file download and HTML (wrapped applications, e.g., Cordova).

    Best practices. Make stuff smaller. GZIP: smaller files download faster; best for files larger than 800 bytes. Minification: remove tabs and comments; the browser doesn't need them, so give the processor only what it needs and separate the wheat from the chaff. Images: make images smaller. A 1024x1024 image for a checkmark? Compress it and make it 33% smaller; ARO records the screen, and it could probably be 9 times smaller.

    Download less stuff. 17% of HTTP content on mobile is duplicate data because of missing caching; reloading from cache is 75% to 99% faster than downloading again, for up to 75% possible savings, which means the app starts up faster because it uses the cache; and everyone wants the app starting up in 2 seconds.

    Make fewer HTTP requests. Inline and combine CSS and JS when possible to reduce the number of requests; sprite images that are used often.

    Use fewer connections. It's faster and uses less battery. For example: downloading an image every 60 seconds, downloading an ad every 60 seconds, and sending analytics every 60 seconds. Instead of that, use a transaction manager and download everything at once, reducing the amount of time connected to the network by 40%. Also, 80% of applications do NOT close connections when they are finished. E.g., download a picture, and 10 seconds later the radio turns off; if you do not explicitly close the connection, eventually the server closes it, giving 38% more tail time, while closing the connection right away uses 40% less energy. Background data traffic is 27% of data and 55% of network time; this kills the battery. (A small illustration of batching and closing connections follows below.)

    Look at redirection. It adds 200 to 600 ms on each connection. A waterfall diagram shows all the requests: e.g., xyz.com redirects to www.xyz.com, which redirects to xyz.mobi, which redirects to www.xyz.com. Use the waterfall visualization of packets to minimize redirects; redirects themselves are fine.

    HTML best practices. Order matters, and watch out for hiding code: JS downloading blocks rendering, so always load CSS before JS, or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser downloads them anyway, which adds latency to the application. Some apps turn on GPS for no reason. Tell the network when you are done, though maybe some other app is using the radio at the same time. It's all about knowing best practices: everyone wins with ARO (carriers, e.g., AT&T, developers, and customers). Faster apps, better battery usage, better network traffic, better app reviews, happier customers. The MBTA app was referenced as an example. ARO is free, open source, and can test all platforms.
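    As a toy illustration of the "fewer connections, close them when finished" advice (Python with the requests library; the URLs are placeholders, not anything from the session): batch the periodic fetches into one burst over a single keep-alive connection, then close it explicitly instead of letting the radio idle through tail time until the server hangs up:

        import requests

        # One burst instead of three separate 60-second timers: a single TCP
        # connection is reused for all requests, then closed deliberately.
        URLS = [
            "https://example.com/ad",         # placeholder endpoints
            "https://example.com/analytics",
            "https://example.com/image",
        ]

        with requests.Session() as session:
            payloads = [session.get(url, timeout=10).content for url in URLS]
        # Leaving the 'with' block closes the pooled connections right away,
        # avoiding the ~10 seconds of high-power radio tail time per fetch.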

    Read the article

  • Problems setting NTP servers with w32tm for a DC that is a Hyper-V guest

    - by R.Tonheim
    Hello! I have tried to set my DC to get its time from several NTP servers. I followed this answer (http://serverfault.com/questions/24298/w32time-sync-problems-for-hyper-v-guests-w32time-event-ids-38-24-29-35/24299#24299) to do it. First I disabled Time Synchronization in the Hyper-V Integration Services for each guest, then restarted the Windows Time service on the guest. Before this, I had used this command:

        w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

    and cmd said: The command completed successfully. But the time was still 10 minutes wrong... I ran w32tm again after restarting the DC, without it having any effect. w32tm /query /status still says "Source: Local CMOS Clock".

    From my cmd:

        Microsoft Windows [Version 6.0.6002]
        Copyright (c) 2006 Microsoft Corporation. All rights reserved.

        C:\Users\Administrator.MHG>w32tm /query /status
        Leap Indicator: 0(no warning)
        Stratum: 1 (primary reference - syncd by radio clock)
        Precision: -6 (15.625ms per tick)
        Root Delay: 0.0000000s
        Root Dispersion: 10.0000000s
        ReferenceId: 0x4C4F434C (source name: "LOCL")
        Last Successful Sync Time: 05.09.2009 20:06:21
        Source: Local CMOS Clock
        Poll Interval: 6 (64s)

        C:\Users\Administrator.MHG>w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
        The command completed successfully.

        C:\Users\Administrator.MHG>w32tm /query /status
        Leap Indicator: 0(no warning)
        Stratum: 1 (primary reference - syncd by radio clock)
        Precision: -6 (15.625ms per tick)
        Root Delay: 0.0000000s
        Root Dispersion: 10.0000000s
        ReferenceId: 0x4C4F434C (source name: "LOCL")
        Last Successful Sync Time: 05.09.2009 20:06:21
        Source: Local CMOS Clock
        Poll Interval: 6 (64s)
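    Update: from reading around, the sequence below is what usually gets a DC off "Local CMOS Clock" once Hyper-V time sync is disabled. This is a hedged sketch, not a verified fix for this exact box: the peer list is normally space-separated inside the quotes (not semicolon-separated), and the Windows Time service has to be restarted for the new configuration to take effect (run from an elevated prompt):

        net stop w32time
        w32tm /config /manualpeerlist:"ntp.uio.no 0.no.pool.ntp.org 1.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
        net start w32time
        w32tm /resync /rediscover
        w32tm /query /status

    If the source still shows LOCL after this, check that HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\Type reads NTP rather than NT5DS.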

    Read the article

  • SQL Server Express 2008 R2 installation error on Windows 7

    - by Shai Sherman
    Hello, I created an install script that installs SQL Server 2008 R2 on Windows XP SP3, Windows Vista and Windows 7. One of the commands used in the installation performs a silent install of SQL Server 2008 R2. When I install it on Windows XP everything works just fine, but when I try to install it on Windows 7 I get an error. What am I doing wrong? Here is the command line that I use:

        Setup.exe /ConfigurationFile=Mysetup.ini

    Mysetup.ini file:

        -------------------------------------Start of ini file ---------------------------------
        ;SQL SERVER 2008 R2 Configuration File
        ;Version 1.0, 5 May 2010
        ;
        [SQLSERVER2008]
        ; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will reflect the instance ID of the SQL Server instance.
        INSTANCEID="MSSQLSERVER"
        ; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter.
        ACTION="Install"
        ; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, and Tools. The SQL feature will install the database engine, replication, and full-text. The Tools feature will install Management Tools, Books Online, Business Intelligence Development Studio, and other shared components.
        FEATURES=SQLENGINE
        ; Displays the command line parameters usage
        HELP="False"
        ; Specifies that the detailed Setup log should be piped to the console.
        INDICATEPROGRESS="False"
        ; Setup will not display any user interface.
        QUIET="False"
        ; Setup will display progress only without any user interaction.
        QUIETSIMPLE="True"
        ; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system.
        ;X86="False"
        ; Specifies the path to the installation media folder where setup.exe is located.
        ;MEDIASOURCE="z:\"
        ; Detailed help for command line argument ENU has not been defined yet.
        ENU="True"
        ; Parameter that controls the user interface behavior. Valid values are Normal for the full UI, and AutoAdvance for a simplified UI.
        ; UIMODE="Normal"
        ; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature.
        ERRORREPORTING="False"
        ; Specify the root installation directory for native shared components.
        ;INSTALLSHAREDDIR="D:\Program Files\Microsoft SQL Server"
        ; Specify the root installation directory for the WOW64 shared components.
        ;INSTALLSHAREDWOWDIR="D:\Program Files (x86)\Microsoft SQL Server"
        ; Specify the installation directory.
        ;INSTANCEDIR="D:\Program Files\Microsoft SQL Server"
        ; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature.
        SQMREPORTING="False"
        ; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS).
        INSTANCENAME="SQLEXPRESS"
        SECURITYMODE=SQL
        SAPWD=SystemAdmin
        ; Agent account name
        AGTSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE"
        ; Auto-start service after installation.
        AGTSVCSTARTUPTYPE="Manual"
        ; Startup type for Integration Services.
        ;ISSVCSTARTUPTYPE="Automatic"
        ; Account for Integration Services: Domain\User or system account.
        ;ISSVCACCOUNT="NT AUTHORITY\NetworkService"
        ; Controls the service startup type setting after the service has been created.
        ;ASSVCSTARTUPTYPE="Automatic"
        ; The collation to be used by Analysis Services.
        ;ASCOLLATION="Latin1_General_CI_AS"
        ; The location for the Analysis Services data files.
        ;ASDATADIR="Data"
        ; The location for the Analysis Services log files.
        ;ASLOGDIR="Log"
        ; The location for the Analysis Services backup files.
        ;ASBACKUPDIR="Backup"
        ; The location for the Analysis Services temporary files.
        ;ASTEMPDIR="Temp"
        ; The location for the Analysis Services configuration files.
        ;ASCONFIGDIR="Config"
        ; Specifies whether or not the MSOLAP provider is allowed to run in process.
        ;ASPROVIDERMSOLAP="1"
        ; A port number used to connect to the SharePoint Central Administration web application.
        ;FARMADMINPORT="0"
        ; Startup type for the SQL Server service.
        SQLSVCSTARTUPTYPE="Automatic"
        ; Level to enable FILESTREAM feature at (0, 1, 2 or 3).
        FILESTREAMLEVEL="0"
        ; Set to "1" to enable RANU for SQL Server Express.
        ENABLERANU="1"
        ; Specifies a Windows collation or an SQL collation to use for the Database Engine.
        SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"
        ; Account for SQL Server service: Domain\User or system account.
        SQLSVCACCOUNT="NT Authority\System"
        ; Default directory for the Database Engine user databases.
        ;SQLUSERDBDIR="K:\Microsoft SQL Server\MSSQL\Data"
        ; Default directory for the Database Engine user database logs.
        ;SQLUSERDBLOGDIR="L:\Microsoft SQL Server\MSSQL\Data\Logs"
        ; Directory for Database Engine TempDB files.
        ;SQLTEMPDBDIR="T:\Microsoft SQL Server\MSSQL\Data"
        ; Directory for the Database Engine TempDB log files.
        ;SQLTEMPDBLOGDIR="T:\Microsoft SQL Server\MSSQL\Data\Logs"
        ; Provision current user as a Database Engine system administrator for SQL Server 2008 R2 Express.
        ADDCURRENTUSERASSQLADMIN="True"
        ; Specify 0 to disable or 1 to enable the TCP/IP protocol.
        TCPENABLED="1"
        ; Specify 0 to disable or 1 to enable the Named Pipes protocol.
        NPENABLED="0"
        ; Startup type for Browser Service.
        BROWSERSVCSTARTUPTYPE="Automatic"
        ; Specifies the startup mode of the report server NT service:
        ;   Manual - Service startup is manual mode (default).
        ;   Automatic - Service startup is automatic mode.
        ;   Disabled - Service is disabled.
        ;RSSVCSTARTUPTYPE="Automatic"
        ; Specifies which mode report server is installed in. Default value: "FilesOnly"
        ;RSINSTALLMODE="FilesOnlyMode"
        ; Accept SQL Server 2008 R2 license terms
        IACCEPTSQLSERVERLICENSETERMS="TRUE"
        ;setup.exe /CONFIGURATIONFILE=Mysetup.ini /INDICATEPROGRESS
        --------------------------- End of ini file -------------------------------------

    And I get this error:

        2010-08-31 18:05:53 Slp: Error result: -2068119551
        2010-08-31 18:05:53 Slp: Result facility code: 1211
        2010-08-31 18:05:53 Slp: Result error code: 1
        2010-08-31 18:05:53 Slp: Sco: Attempting to create base registry key HKEY_LOCAL_MACHINE, machine
        2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey
        2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey Software\Microsoft\PCHealth\ErrorReporting\DW\Installed
        2010-08-31 18:05:53 Slp: Sco: Attempting to get registry value DW0200
        2010-08-31 18:05:53 Slp: Submitted 1 of 1 failures to the Watson data repository

    What is the meaning of this? What do I need to do to fix this problem? Here is the Summary file:

        Overall summary:
          Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Exit code (Decimal): -2068119551
          Exit facility code: 1211
          Exit error code: 1
          Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Start time: 2010-08-31 18:03:44
          End time: 2010-08-31 18:05:51
          Requested action: Install
          Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt
          Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.50.1600.1%26EvtType%3d0x6121810A%400xC24842DB

        Machine Properties:
          Machine name: NVR
          Machine processor count: 2
          OS version: Windows 7
          OS service pack:
          OS region: United States
          OS language: English (United States)
          OS architecture: x86
          Process architecture: 32 Bit
          OS clustered: No

        Product features discovered:
          Product  Instance  Instance ID  Feature  Language  Edition  Version  Clustered

        Package properties:
          Description: SQL Server Database Services 2008 R2
          ProductName: SQL Server 2008 R2
          Type: RTM
          Version: 10
          SPLevel: 0
          Installation location: C:\Disk1\setupsql\x86\setup\
          Installation edition: EXPRESS

        User Input Settings:
          ACTION: Install
          ADDCURRENTUSERASSQLADMIN: True
          AGTSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
          AGTSVCPASSWORD: *
          AGTSVCSTARTUPTYPE: Disabled
          ASBACKUPDIR: Backup
          ASCOLLATION: Latin1_General_CI_AS
          ASCONFIGDIR: Config
          ASDATADIR: Data
          ASDOMAINGROUP:
          ASLOGDIR: Log
          ASPROVIDERMSOLAP: 1
          ASSVCACCOUNT:
          ASSVCPASSWORD: *
          ASSVCSTARTUPTYPE: Automatic
          ASSYSADMINACCOUNTS:
          ASTEMPDIR: Temp
          BROWSERSVCSTARTUPTYPE: Automatic
          CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini
          CUSOURCE:
          ENABLERANU: True
          ENU: True
          ERRORREPORTING: False
          FARMACCOUNT:
          FARMADMINPORT: 0
          FARMPASSWORD: *
          FEATURES: SQLENGINE
          FILESTREAMLEVEL: 0
          FILESTREAMSHARENAME:
          FTSVCACCOUNT:
          FTSVCPASSWORD: *
          HELP: False
          IACCEPTSQLSERVERLICENSETERMS: True
          INDICATEPROGRESS: False
          INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
          INSTALLSHAREDWOWDIR: C:\Program Files\Microsoft SQL Server\
          INSTALLSQLDATADIR:
          INSTANCEDIR: C:\Program Files\Microsoft SQL Server\
          INSTANCEID: MSSQLSERVER
          INSTANCENAME: SQLEXPRESS
          ISSVCACCOUNT: NT AUTHORITY\NetworkService
          ISSVCPASSWORD: *
          ISSVCSTARTUPTYPE: Automatic
          NPENABLED: 0
          PASSPHRASE: *
          PCUSOURCE:
          PID: *
          QUIET: False
          QUIETSIMPLE: True
          ROLE: AllFeatures_WithDefaults
          RSINSTALLMODE: FilesOnlyMode
          RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
          RSSVCPASSWORD: *
          RSSVCSTARTUPTYPE: Automatic
          SAPWD: *
          SECURITYMODE: SQL
          SQLBACKUPDIR:
          SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
          SQLSVCACCOUNT: NT Authority\System
          SQLSVCPASSWORD: *
          SQLSVCSTARTUPTYPE: Automatic
          SQLSYSADMINACCOUNTS:
          SQLTEMPDBDIR:
          SQLTEMPDBLOGDIR:
          SQLUSERDBDIR:
          SQLUSERDBLOGDIR:
          SQMREPORTING: False
          TCPENABLED: 1
          UIMODE: AutoAdvance
          X86: False

        Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini

        Detailed results:
          Feature: Database Engine Services
          Status: Failed: see logs for details
          MSI status: Passed
          Configuration status: Failed: see details below
          Configuration error code: 0x0A2FBD17@1211@1
          Configuration error description: The process cannot access the file because it is being used by another process.
          Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt

        Rules with failures:
          Global rules:
          Scenario specific rules:
          Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\SystemConfigurationCheck_Report.htm

    What should I do, and why does this problem occur?
    Thanks, Shai.

    Read the article

  • Reverse SSH Tunnel

    - by chris
    I am trying to forward web traffic from a remote server to my local machine in order to test out some API integration (Tropo, PayPal, etc). Basically, I'm trying to set up something similar to what tunnlr.com provides. I've initiated the SSH tunnel with the command:

        $ ssh -nNT -R :7777:localhost:5000 user@server

    Then I can see that the server is now listening on port 7777:

        user@server:$ netstat -ant | grep 7777
        tcp        0      0 127.0.0.1:7777    0.0.0.0:*    LISTEN
        tcp6       0      0 ::1:7777          :::*         LISTEN

        user@server:$ curl localhost:7777
        Hello from local machine

    So that works fine; the curl request is actually served from the local machine. Now, how do I enable server.com:8888 to be routed through that tunnel? I've tried using nginx like so:

        upstream tunnel {
            server 0.0.0.0:7777;
        }

        server {
            listen 8888;
            server_name server.com;

            location / {
                access_log /var/log/nginx/tunnel-access.log;
                error_log /var/log/nginx/tunnel-error.log;
                proxy_pass http://tunnel;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
            }
        }

    From the nginx error log I see:

        [error] 11389#0: *1 connect() failed (111: Connection refused)

    I've been looking at trying to use iptables, but haven't made any progress. iptables seems like a more elegant solution than running nginx just for tunneling. Any help is greatly appreciated. Thanks!
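    Update: two things I'm going to try, sketched below rather than verified. First, per the netstat output, sshd has bound the remote forward to loopback only (127.0.0.1:7777), so the nginx upstream should point at 127.0.0.1 rather than 0.0.0.0. Second, if I want to expose port 7777 directly and skip nginx entirely, sshd needs GatewayPorts enabled before it will bind a remote forward to a public interface:

        # nginx: connect to where sshd is actually listening
        upstream tunnel {
            server 127.0.0.1:7777;
        }

        # /etc/ssh/sshd_config on the server (only needed if bypassing nginx):
        #     GatewayPorts clientspecified
        # then restart sshd and reopen the tunnel binding all interfaces:
        #     ssh -nNT -R 0.0.0.0:7777:localhost:5000 user@server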

    Read the article

  • Using the RST3 plugin in the Leo Outliner

    - by T-Boy
    I'm currently trying out the Leo outliner, and I've heard quite a bit about the rst3 plugin that it has. I'm not planning to use Leo to program just yet; at this point I'm wondering if it might be useful for generating HTML and PDF documents, as I'm currently quite enamored with reStructuredText and how it works. I'm using my Ubuntu Netbook Remix netbook (running 9.10, I believe). I think I've got it down pat, more or less: I've installed docutils using the Synaptic Package Manager, and I think I've got SilverCity installed, as per the requirements; I've downloaded the archive and then run "sudo python setup.py install" with no errors. The thing is, I'm not exactly sure how to invoke the rst3 plugin itself. It doesn't appear in the Plugins menu in Leo right now, and the documentation I've managed to find doesn't clearly explain how to use the plugin. Has anyone had any experience using the rst3 plugin in Leo? It's a little confusing right now, and searches on Google don't seem to be helping much any more. I'm using the latest 4.7.1 final version of Leo from the Synaptic Package Manager (I was informed that this would offer the best integration with UNR, so I figured, what the heck). Have I missed any steps here, and are there any useful tutorials on how to use the rst3 plugin?

    Read the article

  • Network monitoring tools with API features

    - by Kev
    We use KS-Soft's Advanced Host Monitor package to monitor around 2000 items on our network. We think it's great, the chap that supports it is fantastic, and the product is fast, stable and mature, but I feel that, as we grow as a company, it's beginning to show some friction points in the area of integration with our back-office admin systems. One of the things we'd like to do is be able to add new tests to whatever monitoring tool we use via an API. For example, when orders for servers come in from our retail interface, the server gets built automatically, and as part of the automated build process we'd like to automatically add new tests to the network monitoring systems. HostMonitor has some support for this via a feature called HM Script, but we're starting to encounter some speed bumps: we can't add new operators/users, and we can't define new "Action Profiles" (the actions to be taken when a test goes good or bad). What we love about HostMonitor, though, are the Action Profiles. For example, if a Windows IIS box goes bad, our action profile for a bad test does something like:

    - Check the host again (one time)
    - Wait another 30 seconds, then test again
    - Try restarting the app pool on the remote machine (up to two times)
    - Send an email to ops about the restart failure
    - Try restarting IIS on the remote machine (up to four times)
    - Page the duty admin (up to 5 times; stops after the duty admin ACKs the alert)
    - Page the backup duty admin (5 times; stops after the duty admin ACKs the alert)

    I'm starting to look around at other network monitoring tools, and I'm looking for:

    - a comprehensive API to add/remove/control tests, test "action profiles", and operators (not just plugins; we need control and admin interfaces)
    - the ability to have quite detailed action/escalation profiles (and to define these via an API)

    I've looked at Nagios and Icinga, but I can't tell from their documentation whether we could have these features or not, or, if we could, how much work would be involved to implement/customise them. Can anyone provide any advice, guidance or experiences? (A sketch of the config-generation approach I've been reading about is below.)
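    For the Nagios/Icinga angle specifically: neither has much of a remote admin API in the sense I mean above, but their object configuration is plain text files, so the usual pattern seems to be having the provisioning system generate a definition file per server, validate, and reload. A hedged sketch, with all host names, templates and contact groups invented:

        # /etc/nagios/conf.d/web042.cfg, written by the build automation
        define host {
            use         linux-server      ; template carrying defaults
            host_name   web042
            address     10.0.3.42
        }

        define service {
            use                   generic-service
            host_name             web042
            service_description   HTTP
            check_command         check_http
            contact_groups        ops-oncall
        }

        # validate and apply:
        #   nagios -v /etc/nagios/nagios.cfg && service nagios reload

    Escalation chains like the IIS example would map onto event handlers (a command run when a check goes bad, e.g. a script that recycles the app pool) plus serviceescalation definitions for the paging ladder; workable, but spread across more objects than HostMonitor's single Action Profile.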

    Read the article

  • Windows Server 2008 R2 SP1: Missing Drivers Will Not Install?

    - by bleepzter
    I am building my dev machine with WS2008R2SP1. I consider Windows Server the best development environment for C#, since I tend to focus on server-centric (integration) applications. The desktop flavor of Windows (7) doesn't play nicely with things like Commerce Server or BizTalk... so I like to stick to the environment I develop for. Previously I used to develop inside VMs, but I've found that it is super inefficient and tends to take a toll on the laptops (I've gone through two of them in 6 months). The problem is that I have multiple devices that do not want to be recognized by Windows. My machine is a Dell Precision M4500: Intel Core i7-Q740, 1TB HDD, 8GB RAM, Dell-rebranded Broadcom DW1501 802.11n half-mini card, Dell-rebranded Broadcom DW375 Bluetooth module, Intel 82577LM gigabit network connection, NVIDIA Quadro FX1800 graphics. The devices in question are the Dell-rebranded Broadcom network and Bluetooth adapters:

        Broadcom USH:
        USB\VID_0A5C&PID_5800&REV_0101&MI_00
        USB\VID_0A5C&PID_5800&MI_00

        DW375 Bluetooth Module:
        USB\VID_413C&PID_8187&REV_0517
        USB\VID_413C&PID_8187

    When I ran the Broadcom installers, I got "Operating System not supported", which I think is a big oversight on Broadcom's part. Why check the OS version string? UGHGHGH. Moreover, if I try to manually force the driver in Windows, I get an error:

        Driver Management concluded the process to install driver FileRepository\btwampsecfl.inf_amd64_neutral_d8fc2b85d035ed47\btwampsecfl.inf for Device Instance ID USB\VID_0A5C&PID_5800&MI_00\7&66DE6C9&0&0000 with the following status: 0xe0000217.

        - or -

        Driver Management concluded the process to install driver FileRepository\btwampfl.inf_amd64_neutral_d4c4acf036c61299\btwampfl.inf for Device Instance ID USB\VID_413C&PID_8187\90004EEEF5A6 with the following status: 0xe0000217.

    I googled the 0xe0000217 error code, and it means "Bad service install section in the driver inf file"... Any ideas on how to fix this? I also tried the post by BetaIQ on the MSDN Forum; unfortunately, the links to the driver package included in the post were dead :( PS: On a side note, I also do mobile development for Android, iOS, Windows Phone, and BB, so having Bluetooth is quite useful with mobile devices.

    Read the article

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. BACKGROUND: We are a growing 'big(ish)'-data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization. Our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS/EC2 is not a silver bullet compared with spending the time to rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:

    - We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with its growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big Cartesian products that produce slow queries, should we roll out what we know, after more optimization, with respect to Ubuntu/MySQL plus the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer?
    - What are the key metrics I need to gather from Apache and MySQL, other than things like disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
    - What other services, aside from the basic web/database, have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer but, echoing my previous question, do you sometimes find it worth it (initially having it be a gamble aside from basic homework) to dive into a whole new environment and end up finding a way to produce your data/site product more efficiently?
    - Is there anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone else whose company is very niche and very, very technical (scientifically, or anybody for that matter)?

    Thanks very much for your input; I think this thread could be valuable to others as well.

    Read the article

  • LAN->LAN IP translation (for TortoiseSVN + Artifacts + Buffalo router)

    - by Armchair Bronco
    Here's my scenario: I've got a VisualSVN server on my main dev box at home. I'm also using Visual Studio 2010, TortoiseSVN, the VisualSVN client (for source control), and Versioned 'Artifacts' (for bug tracking). (I had to modify the fake URLs below to use only one slash because, as a new user, I can't post more than one real URL.) I've got my Buffalo AirStation WHR-HP-G300N router properly configured so my business partner can connect to the SVN server. I have port forwarding enabled for the internet-side IP address (like http:/99.888.77.66:443), which gets forwarded to an internal IP (like 192.168.11.6). This part is working great. The problem I'm having is with the integration piece between TortoiseSVN and my bug-tracking system. I need to provide a bugtraq:url property, but I haven't been able to get relative paths to work, so I'm forced to use an absolute URL. On my end, I need to use the name of my server (for example: bugtraq:url = https:/my-server/svn/bla...), but this doesn't work for my partner; he needs to specify the IP address (for example: bugtraq:url = https:/999.888.77.66:443/svn/bla...). Is there a way to configure my router such that the IP address in this parameter gets re-routed/re-mapped to "https://my-server" if the request originates from the LAN itself? My router's software supports LAN-Internet and Internet-LAN, but I don't see LAN-LAN.
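    Update: from what I've since read, what I'm asking the router for is usually called hairpin (or loopback) NAT, and many consumer routers don't offer it. The common workaround is split DNS: have the same name resolve to the public address outside and the private address inside, so the bugtraq:url property can use the hostname for everyone. A sketch, using the fake addresses from above:

        # partner's machine: hosts file entry (or your public DNS record)
        99.888.77.66    my-server

        # my machine: hosts file entry (or internal DNS record)
        192.168.11.6    my-server

        # then a single property works on both sides:
        # bugtraq:url = https:/my-server/svn/bla...

    On Windows, the hosts file lives at C:\Windows\System32\drivers\etc\hosts.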

    Read the article

  • Good PC-based video conference control tools?

    - by Jeremy Wadhams
    I'm looking to improve the user experience in our video conference rooms by simplifying the things users do all the time (setting up a call, muting and unmuting, panning and zooming between a few room-appropriate presets) and taking away entirely the functions that we don't use or that are more likely to ruin the experience (changing the brightness of the TV, using the VNC presenter mode). The rooms each have Tandberg Edge or MXP video units; I'm not looking at PC-based solutions like Skype or iChat. The classic way to do this would be to plunk down huge money for an AMX or Crestron panel. This does have some advantages, and I am considering that approach, but in my initial investigation it looks expensive, inflexible, and proprietary. On the other hand, the Tandberg video units I'm looking to control have a very thorough XML API (PDF), so some of the integration magic that Crestron and AMX consider a value-add, I could reproduce at mashup speed. Is anyone aware of an open-source or commercial product that takes advantage of the readily available web APIs and some simple touch-screen PCs, and builds a product that is more like skinning an AJAX web app, and ideally more cost-effective than the proprietary panels?

    Read the article

  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D of GNOME 3 to work in my VirtualBox; I have been poring through Internet and forum posts, but to no avail. Here's what I've done so far:

    - VirtualBox 4.1.4r74921 on Windows 7
    - Installed Ubuntu Desktop 11.10 (32-bit)
    - Enabled 3D acceleration
    - Allocated 1.5GB of RAM
    - Allocated 50MB video memory (hope this is not the culprit)
    - Installed Guest Additions 4.1.4
    - Did apt-get update and apt-get upgrade
    - Booted back into Ubuntu: it falls back to Unity 2D

    Shared folders and mouse integration all work, so the Guest Additions are properly installed. I tried the command below; here is the output:

        /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: Mesa Project
        OpenGL renderer string: Software Rasterizer
        OpenGL version string: 2.1 Mesa 7.11
        Not software rendered: no
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    I am trying to find out what the "no" means, but cannot find any good answers. The host is an Intel Core i5 processor with 4GB of RAM; display adapter: NVIDIA GeForce 8400GS. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find one?
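    Update: the line to stare at appears to be "OpenGL renderer string: Software Rasterizer": the guest is not using the Guest Additions 3D driver at all, so Unity's support test fails regardless of the host GPU. Two things I'm going to try, offered as a sketch rather than a confirmed fix: raise the video memory from 50MB to the 128MB maximum and confirm 3D acceleration from the command line, then reinstall the Guest Additions from the VM window (Devices > Install Guest Additions...) so the OpenGL guest driver gets rebuilt:

        # with the VM powered off; "Ubuntu 11.10" is whatever the VM is named
        VBoxManage modifyvm "Ubuntu 11.10" --vram 128 --accelerate3d on

    After rebooting the guest, rerun /usr/lib/nux/unity_support_test -p and check whether the renderer string still says Software Rasterizer.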

    Read the article

  • Fix: Connections to SQL Server 2005 on Windows Vista suddenly stop working

    - by NTulip
    On my Vista machine at work, applications and SQL Server Management Studio work fine connecting to SQL Server 2005. Sometimes they are OK for weeks at a time, sometimes for hours, and then they stop connecting. I've tried everything to get it to work, including installing SP2 and running the user provisioning tool, without any luck. The only way to fix it was to restart.

    The error: connections are refused with the standard error message:

        Cannot connect to SERVER_NAME\INSTANCE_NAME
        ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476

    The fix: stop and restart the SQL Server Browser, SQL Server Integration Services, and SQL Server Active Directory Helper services. Works like a charm. (A scripted version is sketched below.)
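    For anyone who wants to script the fix instead of clicking through services.msc, something like the following works from an elevated prompt. The service names shown are the usual SQL Server 2005 defaults; verify yours with "sc query" first:

        net stop SQLBrowser
        net start SQLBrowser
        net stop MsDtsServer
        net start MsDtsServer
        net stop MSSQLServerADHelper
        net start MSSQLServerADHelper

    MsDtsServer is the SQL Server 2005 Integration Services service, and MSSQLServerADHelper is the Active Directory Helper.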

    Read the article

  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future, so we are looking at new document management systems. Requirements:

    - Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
    - Ability to redact or obscure data. Customers may fax an order with credit card data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with tax IDs. Certain users should be able to see the redacted data, but access should be logged.
    - Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
    - AD integration; my users don't need another password.
    - Ability to integrate with other apps. Our current system offers function keys in the order-entry system that spawn the viewer application and open the correct document.
    - Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
    - Retention policy. I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.

    Read the article

  • Matlab computations done over Apple Filing Protocol (AFP) depend on POSIX permissions, ignore ACLs

    - by flumignan
    I'm a system administrator and have never used Matlab, so forgive my general ignorance of the program. My users have encountered problems when executing scripted Matlab actions over AFP against a Mac OS X Server 10.6.7, where the access control list (ACL) should allow the actions but the POSIX-style permissions disallow them. It seems as if Matlab, run locally on the Mac workstations against datasets on the remote server, ignores the ACLs entirely. This is the only application I've ever seen behave this way. The server's filesystem is HFS+J, and all other activity performs as expected. These users cannot use CIFS because of our integration with external directory systems. In this example, for the directory bxdata, the members of the group cibturner should be able to modify the files. Indeed, they can, using any method except Matlab scripts. When a Matlab script hits these files, the POSIX permissions of 644 disallow modification. It's as if the ACLs are irrelevant.

        [root@cib 16:00:24 /14181.2_5sM]# ls -leh@ bxdata/
        total 128
        -rw-r--r--+ 1 kel32  staff  18K Feb 15 09:31 TS-5sMath030708-21073-1.edat
         0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
        -rw-r--r--+ 1 kel32  staff  25K Feb 15 09:31 TS-5sMath030708-21073-1.txt
         0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown

    Because this server has HIPAA data, security is critical. We are not using networked home directories or SAN technology. The Matlab program is run from the user's hard drive; access is granted via Kerberized AFP.
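    Update: a stopgap I'm weighing, since Matlab appears to consult only the POSIX bits, is to make those bits grant what the ACLs already intend. A hedged sketch only; pick whichever group actually owns the working set (cibturner here), and note this widens access for anything else that also reads only POSIX permissions, which matters on a HIPAA server:

        chgrp -R cibturner bxdata/
        chmod -R g+w bxdata/

    Files created over AFP afterward will still arrive as 644 unless the server's umask or the share's default permissions are adjusted, so this would need to be reapplied.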

    Read the article

  • Google Maps mashup for notes/househunting

    - by afray
    I'm house-hunting at the moment and I'm trying to geek it out, er, I mean streamline the decision-making process. I'm currently using Google Maps's "My Maps" feature to store pins for properties. I create one map per estate agent, then put the relevant info into the individual pins. The idea being, I can look at the map and quickly choose which property to view next. However, the pins don't currently link back to the map they're owned by, so you have to hunt a bit to get the estate agent info; it's a hassle to get all maps displayed in a new session if you have lots of agents; and each pin doesn't automatically show its bubble, so you have to do lots of clicking to see all the info you want. I've tried Evernote, but despite its tag system initially showing some promise, I can't find a way to seamlessly integrate maps. A few Google searches don't turn anything up either. Even the big sites, like http://www.rightmove.co.uk, don't seem to provide any maps integration by default: you can see an individual house's location, but not all the results of a search. So is there a web site or Windows program I could use to do something like this? Viewing all properties on a map is a must, as is quick access to contact details.

    Read the article

  • What is the collaborative screen shot/diagramming application recently featured on Hacker News and p

    - by wonsungi
    A few days ago, I saw a video for a screen-capture application. I'm pretty sure I followed a link from Hacker News, possibly to a Lifehacker article. The video was very short, but demonstrated how the application could be used. The application was basically a movable/resizable viewport with a button. When the button is pressed, the contents of the viewport are saved to an image (basically a screen capture). The interesting thing is what you could do after that point. One of the specific examples from the video browsed to Google Maps Street View, grabbed a photo of an intersection, then scribbled notes about where to meet and where the restaurant was in colored "marker". Another example shown was grabbing a house layout from a CAD tool, then scribbling notes on it. The last part of the video showed several possible uses being scrolled through the application's viewport. Now, it seemed very easy to share these images with other people, because there was some type of integration, either with the application's own site and/or with common social websites/chat services. The application was shown running on both Windows and Mac. edit: I think there was an iPhone app as well. Does anyone know what this application is? I tried searching Google, Hacker News, and Lifehacker already.

    Read the article
