Search Results

Search found 68155 results on 2727 pages for 'data security'.

Page 90/2727 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Connect to QuickBooks from PowerBuilder using RSSBus ADO.NET Data Provider

    - by dataintegration
    The RSSBus ADO.NET providers are easy-to-use, standards-based controls that can be used from any platform or development technology that supports Microsoft .NET, including Sybase PowerBuilder. In this article we show how to create a basic PowerBuilder application that performs CRUD operations using the RSSBus ADO.NET Provider for QuickBooks. A similar approach can be used from PowerBuilder with other RSSBus ADO.NET Data Providers to access data from Salesforce, SharePoint, Dynamics CRM, Google, OData, etc.

    Step 1: Open PowerBuilder and create a new WPF Window Application solution.
    Step 2: Add all the visual controls needed for the connection properties.
    Step 3: Add the DataGrid control from the .NET controls.
    Step 4: Configure the columns of the DataGrid control as shown below. The column bindings will depend on the table.

        <DataGrid AutoGenerateColumns="False" Margin="13,249,12,14" Name="datagrid1" TabIndex="70" ItemsSource="{Binding}">
          <DataGrid.Columns>
            <DataGridTextColumn x:Name="idColumn" Binding="{Binding Path=ID}" Header="ID" Width="SizeToHeader" />
            <DataGridTextColumn x:Name="nameColumn" Binding="{Binding Path=Name}" Header="Name" Width="SizeToHeader" />
            ...
          </DataGrid.Columns>
        </DataGrid>

    Step 5: Add a reference to the RSSBus ADO.NET Provider for QuickBooks assembly.
    Step 6 (optional): Set the QBXML Version to 6. Some of the tables in QuickBooks require a later version of QuickBooks to support updates and deletes. Please check the help for details.

    Connect the DataGrid: Once the visual elements have been configured, developers can use standard ADO.NET objects like Connection, Command, and DataAdapter to populate a DataTable with the results of a SQL query:

        System.Data.RSSBus.QuickBooks.QuickBooksConnection conn
        conn = create System.Data.RSSBus.QuickBooks.QuickBooksConnection(connectionString)

        System.Data.RSSBus.QuickBooks.QuickBooksCommand comm
        comm = create System.Data.RSSBus.QuickBooks.QuickBooksCommand(command, conn)

        System.Data.DataTable table
        table = create System.Data.DataTable

        System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter dataAdapter
        dataAdapter = create System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter(comm)

        dataAdapter.Fill(table)
        datagrid1.ItemsSource = table.DefaultView

    The code above can be used to bind the results of any query (set this in command) to the DataGrid. The DataGrid should have the same columns as those returned by the SELECT statement.

    PowerBuilder Sample Project: The sample project that accompanies this article walks through the steps outlined above. You will also need the QuickBooks ADO.NET Data Provider to make the connection. You can download a free trial here.
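
    If you want to prototype the same data access outside PowerBuilder, the sketch below drives the identical Connection/Command/DataAdapter pattern from Windows PowerShell. Only the provider class names come from the article; the assembly path, connection string, and the Customers query are placeholder assumptions that you would adjust to your own installation.

        # Minimal sketch (not from the article): the same ADO.NET pattern in PowerShell.
        # The assembly path, connection string, and query below are placeholder assumptions.
        Add-Type -Path "C:\Program Files\RSSBus\ADO.NET Provider for QuickBooks\lib\System.Data.RSSBus.QuickBooks.dll"

        $connectionString = "..."   # see the provider help for the supported connection options
        $conn    = New-Object -TypeName System.Data.RSSBus.QuickBooks.QuickBooksConnection -ArgumentList $connectionString
        $comm    = New-Object -TypeName System.Data.RSSBus.QuickBooks.QuickBooksCommand -ArgumentList "SELECT ID, Name FROM Customers", $conn
        $adapter = New-Object -TypeName System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter -ArgumentList $comm
        $table   = New-Object System.Data.DataTable

        # Fill a DataTable and display the same ID and Name columns bound in the DataGrid example.
        [void]$adapter.Fill($table)
        $table | Format-Table -AutoSize

    The same pattern applies to the other RSSBus providers mentioned above; only the namespace and the connection string change.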

    Read the article

  • Securely expose WebService from Enterprise Network to Internet Client

    - by hotzen
    Are there any standards (or certified solutions) for exposing a (web) service to the internet from a very security-sensitive network (e.g. banking/finance)? I am not specifically talking about WS-* or any other transport-layer security à la SSL/TLS, but rather about important standards or certifications that must be obeyed. Are there any known products (coming from an SAP environment) that can provide a "high-security proxy" of some sort to expose specific web services to the internet? Any buzzwords a CIO/CTO would be aware of on this subject?

    Read the article

  • New Oracle Information Rights Management release (11.1.1.3)

    - by Simon Thorpe
    Just released is the latest version of the market-leading document security technology from Oracle. Oracle IRM 11g is the result of over 12 years of development and innovation, allowing customers to apply persistent security to their most confidential documents and emails. This latest release continues our refinement of the technology and features the following:

    - Continued improvements to the web-based Oracle IRM Management Website
    - New features in the out-of-the-box classification model
    - New Java APIs improving application integration support
    - Support for DB2 as the IRM database

    Over the coming months we will see more releases of this technology as we improve format support and platform support, and continue the strategy of establishing Oracle IRM as the most secure, scalable and usable document security solution in the market. Want to learn more about Oracle IRM? View our video presentation and demonstration, or try it for yourself via our simple online self-service demo. Keep up to date on Oracle IRM via this blog or on our Twitter, YouTube and Facebook pages.

    Read the article

  • How long for data highlighter mark up to appear in structured data tool?

    - by Max
    I used the Data Highlighter in Webmaster Tools over three weeks ago to mark up some local business data, but there is still no structured data being detected in Webmaster Tools. Does anybody have any experience of approximately how long it takes for Google Webmaster Tools to start reporting structured data that has been marked up with the Data Highlighter? I'm asking specifically about reporting in the Webmaster Tools Structured Data section, as opposed to the markup actually appearing in the SERPs.

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another. That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another?

    I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3. This new tool complements the information in the data model comparison tool. You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1)

    The EBS ATG Seed Data Comparison Report reports on the changes between different EBS releases based upon the seed data changes delivered by the product data loader files (.ldt extension), which are loaded using EBS ATG loader control (.lct extension) files. You can use this new tool to report on the differences in the following types of seed data:

    - Concurrent Program definitions
    - Descriptive Flexfield entity definitions
    - Application Object Library profile option definitions
    - Application Object Library (AOL) key flexfield, function, lookup, and value set definitions
    - Application Object Library (AOL) menu and responsibility definitions
    - Application Object Library messages
    - Application Object Library request set definitions
    - Application Object Library printer style definitions
    - Report Manager / WebADI component and integrator entity definitions
    - Business Intelligence Publisher (BI Publisher) entity definitions
    - BIS Request Set Generator entity definitions
    - ... and more

    Your feedback is welcome. This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback. Please feel free to share your experiences with it by posting a comment here. You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.

    Related Articles:
    - Oracle E-Business Suite Release 12.1.3 Now Available
    - New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
    - EBS 12.0 Minimum Requirements for Extended Support Finalized
    - Five Key Resources for Upgrading to E-Business Suite Release 12
    - E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
    - New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666

    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence Project Management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561

    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging, so bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849

    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • Endpoint Security: How to Protect Data on a Laptop

    Small Business Computing: "But the pain of buying a new computer pales in the face of losing the data from an unprotected laptop. A few simple steps toward data protection can avoid an invasion of your privacy and the real likelihood of identity theft."

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced.

    Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:

        Set the WinRM service type to auto start.
        Start the WinRM service.

        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node (a rough sketch of that approach appears at the end of this excerpt).

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with the concepts of big data and the MapReduce paradigm. Now, how did we get there?

    I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
    It is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");

            # The hashtable to return
            $list = @{};

            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows,
            #     which is a subset of the originally submitted data set.

            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce

    Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            # The hashtable to return
            $redux = @{};

            # Return
            Write-Output $redux;
        }

    All Together Now

    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem, and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            Write-Host ($env:computername + "::Map");

            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    $s | Add-Member -Type NoteProperty -Name Count -Value 1;   # row count for this key
                    $list.Add($row.MyKey, $s);
                }
            }

            Write-Output $list;
        }

        # Build the Reduce function
        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }

            # Reduce
            $redux.Add($s.MyKey, $sum / $count);

            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion

    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel
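
    The excerpt above mentions, but does not show, the alternative of remotely creating scheduled tasks that call Enable-PSRemoting on each node. The following is a rough sketch of that idea, not the script from the referenced TechNet post; the node names, task name, and start time are placeholder assumptions, and the target machines must already allow remote task creation over RPC.

        # Rough sketch (assumptions noted above): schedule a one-time task on each node,
        # over RPC rather than WinRM, that runs Enable-PSRemoting as SYSTEM.
        $nodes = "node1", "node2", "node3"
        foreach ($node in $nodes)
        {
            schtasks.exe /Create /S $node /RU SYSTEM /SC ONCE /ST "23:00" `
                /TN "EnablePSRemoting" `
                /TR "powershell.exe -NoProfile -Command Enable-PSRemoting -Force" /F
        }

    Once the task has run on each node, those machines should be able to accept the remote jobs that Invoke-MapRedux creates.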

    Read the article

  • Is it a bad idea to run SELinux and AppArmor at the same time?

    - by jgbelacqua
    My corporate policy says that Linux boxes must be secured with SELinux (so that a security auditor can check the 'yes, we're extremely secure!' checkbox for each server). I had hoped to take advantage of Ubuntu's awesome default AppArmor security. Is it unwise to run both AppArmor and SELinux? (If so, can this bad idea be mitigated with some AppArmor and/or SELinux tweaks?)

    Update 1/28 -- Kees Cook has pointed out in his answer the dead simple reason why it's a bad idea to run both: the Linux kernel says you can't [1].

    [1] More precisely, the Linux Security Modules (LSM) interface framework is designed for a single running implementation and does not support more than one.

    Update 1/27 -- I've accepted the answer from kenny.r, though I would be happier with some more technical reasons why this would fail, or examples of actual conflicts it would cause.

    Read the article

  • Microsoft Issues Security Guidelines for Windows Azure

    New software development lifecycle outlines how to address security threats in the cloud.

    Read the article

  • Microsoft Issues Security Advisory on Windows Aero

    Microsoft released a security advisory on Tuesday concerning a Windows component involved with desktop graphics display.

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >