Search Results


  • Oracle WebCenter - Well Connected

    - by Brian Dirking
    A good post from Dan Elam on the state of the ECM industry (http://www.aiim.org/community/blogs/community/ECM-Vendors-go-to-War). For those of you who don’t know Dan, he is one of the major forces in the content management industry. He founded eVisory and IMERGE Consulting, he is an AIIM Fellow and a former US Technical Expert to the International Standards Organization (ISO), and has been a driving force behind EmTag, AIIM’s Emerging Technologies Group. His post is interesting – it starts out talking about our Moveoff Documentum campaign, but then it becomes a much deeper insight into the ECM industry. Dan points out that Oracle has been making quiet strides in the ECM industry. In fact, analysts share this view of Oracle, pointing out that Oracle is growing at greater than 20% annually while many of the big vendors are shrinking. And as Dan points out, this cements Oracle as one of the big five in the ECM space – the same week that Autonomy was removed from the Gartner Magic Quadrant for ECM. One of the key things Dan points out is that Oracle WebCenter is well connected. WebCenter has out-of-the-box connections to key enterprise applications such as E-Business Suite, PeopleSoft, Siebel and JD Edwards. Those out-of-the-box integrations make it easy for organizations to drive content right into the places where it is needed, in the midst of business processes. At the same time, WebCenter provides composite interface capabilities to bring together two or more of these enterprise applications onto the same screen. Combine that with the capabilities of Oracle Social Network and you start to see how Oracle is providing a full platform for user engagement. But beyond those connections, WebCenter can also connect to other content management systems. It can index and search those systems from a single point of search, bringing back results in a single combined hitlist. WebCenter can also extend records management capabilities into Documentum, SharePoint, and email archiving systems. From a single console, records managers can define a series, set a retention schedule, and place holds – without having to go to each system to make these updates. Dan points out that there are some new competitive dynamics – to be sure. And it is interesting when a system can interact with another system, enforce dispositions and holds, and enable users to search and retrieve content. Oracle WebCenter is providing the infrastructure to build on, and the interfaces to drive user engagement. It’s an interesting time.

    Read the article

  • Steps for MySQL DB Replication

    - by Manish Agrawal
    Following are the steps for MySQL replication implementation on a Linux machine.

    Pre-implementation steps for DB replication:
    1. Identify the databases to be replicated.
    2. Identify the tables to be ignored during replication per database (for example, log tables).
    3. Carefully identify and replace the variables and paths (locations) mentioned in the commands given below with appropriate values.
    4. Schedule the maintenance activity in odd hours, as these activities will affect all the databases on the Master database server.

    Implementation steps for DB replication:
    1. Configure the /etc/my.cnf file on the Master database server to enable binary logging, set the server id, and configure the db names for which logging should be done.
         [mysqld]
         log-bin=mysql-bin
         server-id=1
         binlog-do-db = dbname
       Note: You can specify multiple DBs in binlog-do-db by using comma separated dbname values like: dbname1, dbname2, …, dbnameN
    2. On the Master database, grant replication slave privileges by executing the following command at the mysql prompt:
         mysql> GRANT REPLICATION SLAVE ON *.* TO slaveuser@<hostname> IDENTIFIED BY 'slavepassword';
    3. Stop the Master & Slave databases by giving the command:
         mysqladmin shutdown
    4. Start the Master database by giving the command:
         /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user&
    5. mysql> FLUSH TABLES WITH READ LOCK;
       Note: Leave the client (putty session) from which you issued the FLUSH TABLES statement running, so that the read lock remains in effect. If you exit the client, the lock is released.
    6. mysql> SHOW MASTER STATUS;
         +---------------+----------+--------------+------------------+
         | File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
         +---------------+----------+--------------+------------------+
         | mysql-bin.003 | 117      | dbname       |                  |
         +---------------+----------+--------------+------------------+
       Note: Record this information, as it will be required when starting the Slave and replication in later steps.
    7. Take a MySQL dump by running the following command in another session window (putty window):
         mysqldump -u user --ignore-table=dbname.tbl_name --ignore-table=dbname.tbl_name2 --master-data dbname > dbname_dump.db
       Note: When choosing databases to include in the dump, remember that you will need to filter out databases on each slave that you do not want to include in the replication process.
    8. Unlock the tables on the Master by giving the following command:
         mysql> UNLOCK TABLES;
    9. Copy the dump file to the Slave DB server.
    10. Start up the Slave using the option --skip-slave:
         /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --skip-slave&
    11. Restore the dump file on the Slave DB server:
         mysql -u user dbname < dbname_dump.db
    12. Stop the Slave database by giving the command:
         mysqladmin shutdown
    13. Configure the /etc/my.cnf file on the Slave database server:
         [mysqld]
         server-id=2
         replicate-ignore-table = dbname.tablename
    14. Start the Slave MySQL server with the 'replicate-do-db=dbname' option:
         /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --replicate-do-db=dbname --skip-slave
    15. Configure the settings on the Slave server for the Master host name, log file name, and position within the log file (as recorded in Step 6 above), using the CHANGE MASTER statement in the MySQL session:
         mysql> CHANGE MASTER TO MASTER_HOST='<master_host_name>', MASTER_USER='<replication_user_name>', MASTER_PASSWORD='<replication_password>', MASTER_LOG_FILE='<recorded_log_file_name>', MASTER_LOG_POS=<recorded_log_position>;
    16. At the Slave server's mysql prompt, give the following commands:
         a. mysql> START SLAVE;
         b. mysql> SHOW SLAVE STATUS;
       Note: To stop the slave for a backup or any other activity, you can use the following command at the Slave server's mysql prompt:
         mysql> STOP SLAVE;

    Refer to the following links for more information on MySQL DB replication: http://dev.mysql.com/doc/refman/5.0/en/replication-options.html http://crazytoon.com/2008/04/21/mysql-replication-replicate-by-choice/ http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
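    Once the slave is running, the step 16 health check can also be scripted instead of run by hand. The sketch below is not part of the original procedure; it is a minimal Java example (assuming MySQL Connector/J on the classpath, with placeholder host and credentials) that reads SHOW SLAVE STATUS and flags a stopped or lagging slave:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class SlaveStatusCheck {
          public static void main(String[] args) throws Exception {
              // Placeholder connection details -- replace with the slave server's values.
              // With pre-JDBC4 drivers, Class.forName("com.mysql.jdbc.Driver") is needed first.
              String url = "jdbc:mysql://slave-host:3306/";
              try (Connection con = DriverManager.getConnection(url, "monitor_user", "monitor_password");
                   Statement st = con.createStatement();
                   ResultSet rs = st.executeQuery("SHOW SLAVE STATUS")) {

                  if (!rs.next()) {
                      System.out.println("Replication is not configured on this server");
                      return;
                  }
                  // Both replication threads must be running for the slave to be healthy
                  String io  = rs.getString("Slave_IO_Running");
                  String sql = rs.getString("Slave_SQL_Running");
                  String lag = rs.getString("Seconds_Behind_Master"); // NULL while the SQL thread is stopped

                  System.out.println("Slave_IO_Running:      " + io);
                  System.out.println("Slave_SQL_Running:     " + sql);
                  System.out.println("Seconds_Behind_Master: " + lag);

                  if (!"Yes".equals(io) || !"Yes".equals(sql)) {
                      System.out.println("WARNING: replication threads are not running");
                  }
              }
          }
      }

    The same three columns (Slave_IO_Running, Slave_SQL_Running, Seconds_Behind_Master) are the ones to watch when running SHOW SLAVE STATUS manually in step 16.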

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed url IIS would just throw up a 404 error: This is an IIS error, not an ASP.NET error so this doesn’t actually come from ASP.NET’s routing engine but from IIS’s handling of extensionless URLs. Note that it’s clearly falling through all the way to the StaticFile handler which is the last handler to fire in the typical IIS handler list. In other words IIS is trying to parse the extensionless URL and not firing it into ASP.NET but failing. As I mentioned, on my local machine this all worked fine, and to make sure local and live setups match I re-copied my Web.config, double checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However no workey on the server with IIS 7.0!!! Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true: <system.webServer> <modules runAllManagedModulesForAllRequests="true"> <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" /> </modules> </system.webServer> And lo and behold, Routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing about this is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However on IIS 7.0 on my live server the same missing setting was not working. On IIS 7.0 this key must be present or Routing will not work. Oddly on IIS 7.5 it appears that you can’t even turn off the behavior – setting runAllManagedModulesForAllRequests="false" had no effect at all and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected. Kind of disappointing too that Windows Server 2008 (R1) can’t be upgraded to IIS 7.5. It sure seems like that should have been possible since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like that with the release of IIS Express Microsoft has taken some steps to untie some of those tight OS links from IIS. Let’s hope that’s the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out. Moral of the story – never assume that your dev setup will work as is on the live setup. It took me forever to figure this out because I assumed that, since my web.config on the local machine was fine and working and I had copied all relevant web.config data to the server, it couldn’t be the configuration settings. I was looking everywhere but in the .config file forever before getting desperate and remembering the flag when I accidentally checked the intellisense settings in the modules key. Never assume anything. The other moral is: Try to keep your dev machine and server OS’s in sync whenever possible. Maybe it’s time to upgrade to Windows Server 2008 R2 after all. More info on Extensionless URLs in IIS Want to find out exactly how extensionless URLs work on IIS 7? 
    Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic! © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data Connectors Downloads here, includes: Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0 Oracle Loader for Hadoop Release 2.1.0 Oracle Data Integrator Companion 11g Oracle R Connector for Hadoop v 2.1 Oracle Big Data Documentation The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB) Integrated Software and Big Data Connectors User's Guide HTML PDF Oracle Data Integrator (ODI) Application Adapter for Hadoop Apache Hadoop is designed to handle and process data that is typically from data sources that are non-relational and data volumes that are beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities: Loading data into Hadoop from the local file system and HDFS Performing validation and transformation of data within Hadoop Loading processed data from Hadoop to an Oracle database for further processing and generating reports Oracle Database Loader for Hadoop Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance. Oracle R Connector for Hadoop Oracle R Connector for Hadoop is a collection of R packages that provide: Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files You install and load this package as you would any other R package. 
Using simple R functions, you can perform tasks such as: Access and transform HDFS data using a Hive-enabled transparency layer Use the R language for writing mappers and reducers Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations Oracle SQL Connector for Hadoop Distributed File System Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats: Data Pump files in HDFS Delimited text files in HDFS Hive tables For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS. Related Documentation Cloudera's Distribution Including Apache Hadoop Library HTML Oracle R Enterprise HTML Oracle NoSQL Database HTML Recent Blog Posts Big Data Appliance vs. DIY Price Comparison Big Data: Architecture Overview Big Data: Achieve the Impossible in Real-Time Big Data: Vertical Behavioral Analytics Big Data: In-Memory MapReduce Flume and Hive for Log Analytics Building Workflows in Oozie

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's, both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick Euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

    Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
        'hCount += 1
        'If hCount Mod 0 = 0 Then
        'Return hCache
        'End If
        'Initialize variables to be filled
        Dim x1, x2, y1, y2, z1, z2 As Integer
        'LINQ queries for solar system data
        Dim systemFromData = From result In jumpDataDB.coordinateDatas Where result.systemId = startSystem Select result.x, result.y, result.z
        Dim systemToData = From result In jumpDataDB.coordinateDatas Where result.systemId = endSystem Select result.x, result.y, result.z
        'LINQ execute
        'Fill variables with solar system data for from and to system
        For Each solarSystem In systemFromData
            x1 = (solarSystem.x)
            y1 = (solarSystem.y)
            z1 = (solarSystem.z)
        Next
        For Each solarSystem In systemToData
            x2 = (solarSystem.x)
            y2 = (solarSystem.y)
            z2 = (solarSystem.z)
        Next
        Dim x3 = Math.Abs(x1 - x2)
        Dim y3 = Math.Abs(y1 - y2)
        Dim z3 = Math.Abs(z1 - z2)
        'Calculate distance and round
        'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
        Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
        'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)
        'hCache = distance
        Return distance
    End Function

    And the main loop, the other 30%:

    'Begin search
    While openList.Count() <> 0
        'Set current system and move node to closed
        currentNode = lowestF()
        move(currentNode.id)
        For Each neighborNode In neighborNodes
            If Not onList(neighborNode.toSystem, 0) Then
                If Not onList(neighborNode.toSystem, 1) Then
                    Dim newNode As New nodeData()
                    newNode.id = neighborNode.toSystem
                    newNode.parent = currentNode.id
                    newNode.g = currentNode.g + neighborNode.distance
                    newNode.h = findDistance(newNode.id, endSystem)
                    newNode.f = newNode.g + newNode.h
                    newNode.security = neighborNode.security
                    openList.Add(newNode)
                    shortOpenList(OLindex) = newNode.id
                    OLindex += 1
                Else
                    Dim proposedG As Integer = currentNode.g + neighborNode.distance
                    If proposedG < gValue(neighborNode.toSystem) Then
                        changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                    End If
                End If
            End If
        Next
        'Check to see if done
        If currentNode.id = endSystem Then
            Exit While
        End If
    End While

    If clarification is needed on my spaghetti code, I'll try to explain.
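    For reference, a heap-backed open list is the usual fix when lowestF() is a linear scan: extraction becomes O(log n) instead of O(n). Caching the coordinateData rows in a dictionary keyed by systemId, built once before the search, would also avoid running two LINQ queries per heuristic call. The sketch below is in Java rather than VB.NET and is only an illustration of the idea; the field names (id, parent, g, h, f) simply mirror the nodeData class in the question and none of this is the poster's actual code.

      import java.util.HashMap;
      import java.util.Map;
      import java.util.PriorityQueue;

      // A minimal heap-backed open list for A*.
      class Node {
          int id, parent;
          double g, h, f;
      }

      class OpenList {
          // Heap ordered by f = g + h, so the best candidate pops in O(log n)
          private final PriorityQueue<Node> heap =
                  new PriorityQueue<>((a, b) -> Double.compare(a.f, b.f));
          // Best node found so far per system id, for the "found a cheaper path" test
          private final Map<Integer, Node> byId = new HashMap<>();

          void addOrImprove(int id, int parent, double g, double h) {
              Node existing = byId.get(id);
              if (existing != null && existing.g <= g) {
                  return; // the path we already have is at least as good
              }
              Node n = new Node();
              n.id = id;
              n.parent = parent;
              n.g = g;
              n.h = h;
              n.f = g + h;
              byId.put(id, n);
              heap.add(n); // stale duplicates are skipped when popped
          }

          Node popLowestF() {
              // Lazy deletion: discard heap entries that have been superseded
              Node n;
              do {
                  n = heap.poll();
              } while (n != null && byId.get(n.id) != n);
              return n;
          }

          boolean isEmpty() {
              return heap.isEmpty();
          }
      }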

    Read the article

  • Recover Lost Form Data in Firefox

    - by Asian Angel
    Have you ever filled in a text area or form in a webpage and something happens before you can finish it? If you like the idea of recovering that lost data then you will want to have a look at the Lazarus: Form Recovery extension for Firefox. Lazarus: Form Recovery in Action For our first example we chose the comment text box area for one of the articles here at the website. As you can see we were not finished typing in the whole comment yet… Notice the “Lazarus Icon” in the lower right corner. Note: We simulated accidental tab closures for our two examples. After getting our webpage opened up again all of our text was gone. Right clicking within the text area showed two options available…”Recover Text & Recover Form”. Notice that our lost text was listed as a “sub menu”…this could be extremely useful in matching up the appropriate text to the correct webpage if you had multiple tabs open before something happened. Click on the correct text listing to insert it. So easy to finish writing our comment without having to start from zero again. In our second example we chose the sign-up form page for the website. As before we were not finished filling in the form… Getting the webpage opened back up showed the same problem as before…all the entered text was lost. This time we right clicked in the browser window area and there was that wonderful “Recover Form Command” waiting to be used. One click and… All of our lost form data was back and we were able to finish filling in the form. For those who may be interested you can disable Lazarus: Form Recovery on individual websites using the “Context Menu” for the “Status Bar Icon” Options There are three sections in the options and you should take a quick look through them to make any desired modifications in how Lazarus: Form Recovery functions. The first “Options Area” focuses on display/access for the extension. The second “Options Area” allows you to expand the type of data retained, enable removal of data within a given time frame, set up a password, disable search indexing, and enable form data retention while in “Private Browsing Mode”. The third “Options Area” focuses on the Lazarus database itself. Conclusion If you have ever lost text area or form data before then you know how much time could be lost in starting over. Lazarus: Form Recovery helps provide a nice backup solution to get you up and running once again with a minimum of effort. Links Download the Lazarus: Form Recovery extension (Mozilla Add-ons) Download the Lazarus: Form Recovery extension (Extension Homepage)

    Read the article

  • Integrating with Fusion Applications using SOAP web services and REST APIs (Part 1 of 2) by Arvind Srinivasamoorthy

    - by JuergenKress
    Fusion Applications provides several types of interfaces to facilitate integration with other applications within the enterprise and on the cloud. As one of the key integration interfaces, Fusion Applications (FA) supports SOAP services based integration, both inbound and outbound. At this point FA doesn’t provide REST APIs, but it is planned for a future release. It is however possible to invoke external REST APIs from FA, which we will discuss. Oracle continues to invest in improving both SOAP and REST based connectivity. The content in this blog is based on features that were available at the time of writing it. In this two part blog, I will cover the following topics briefly:
    - Invoking FA SOAP web services from external applications
    - Identifying the FA SOAP web service to be invoked
    - Sample invocation from an external application
    - Techniques to invoke FA services from an ADF application
    - Invoking external SOAP Web Services from FA (covered in Part 2)
    - Invoking external REST APIs from FA (covered in Part 2)
    I’ll touch upon some basics, so that you can quickly build a few SOAP/REST interactions with FA. If you do not already have access to an FA instance (on-premise or SaaS), you can request a free 30 day trial of the Oracle Sales Cloud using http://cloud.oracle.com 1. Invoking FA SOAP web services from external applications There are two main types of services that FA exposes:
    - ADF Services – These services allow you to perform CRUD operations on Fusion business objects. For example, Sales Party Service, Opportunity Service etc. Using these services you can typically perform operations such as get, find, create, delete, update etc on FA objects. These services are typically useful for UI driven integrations such as looking up FA information from external application UIs, using third party interfaces to create/update data in FA. They are also used in non-UI driven integration use cases such as initial upload of business or setup data, synchronizing data with external systems, etc.
    - Composite Services – These services involve more logic than CRUD and often involve human workflows, rules etc. These services perform a business function such as Get Orchestration Order Service and are used when building larger process based integrations with external systems. These services are usually asynchronous in nature and are not typically used for UI integration patterns.
    1a. Identifying the FA SOAP web service to be invoked All FA web service metadata is available through an OER instance (Oracle Enterprise Repository) which is publicly available via http://fusionappsoer.oracle.com. This is the starting point for you to discover the services that you are going to work with. You do not need to own an FA account to browse the services using the above UI. You can use the search area on the left to narrow down your search to what you are looking for. For example, you can choose the type as ADF Services or Composite, and you can narrow your search to a specific FA version, Product Family etc. Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: AppAdvantage,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress,Arvind Srinivasamoorthy
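    To make the "sample invocation from an external application" idea concrete before Part 2, here is a rough Java sketch (not from the original article) of calling an FA SOAP service over plain HTTP. The endpoint URL, credentials, and the empty SOAP body are placeholders; the real service URL, operation element, and security policy come from the service's WSDL and its OER entry as described above, and many FA services require WS-Security policies rather than the basic authentication shown here.

      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.nio.charset.StandardCharsets;
      import java.util.Base64;
      import java.util.Scanner;

      public class FaSoapClientSketch {
          public static void main(String[] args) throws Exception {
              // Placeholder endpoint and payload -- take the real values from the service WSDL in OER
              String endpoint = "https://fa-host:443/yourServicePath/YourService";
              String soapEnvelope =
                  "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                  "  <soap:Body>" +
                  "    <!-- operation element and namespace come from the WSDL -->" +
                  "  </soap:Body>" +
                  "</soap:Envelope>";

              HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
              conn.setRequestMethod("POST");
              conn.setDoOutput(true);
              conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
              conn.setRequestProperty("SOAPAction", "");
              // Basic auth shown for brevity only; real FA deployments often mandate WS-Security
              String credentials = Base64.getEncoder()
                      .encodeToString("username:password".getBytes(StandardCharsets.UTF_8));
              conn.setRequestProperty("Authorization", "Basic " + credentials);

              try (OutputStream os = conn.getOutputStream()) {
                  os.write(soapEnvelope.getBytes(StandardCharsets.UTF_8));
              }

              // An HTTP error status will make getInputStream() throw; fine for a sketch
              System.out.println("HTTP status: " + conn.getResponseCode());
              try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
                  while (sc.hasNextLine()) {
                      System.out.println(sc.nextLine());
                  }
              }
          }
      }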

    Read the article

  • How to detect UTF-8-based encoded strings [closed]

    - by Diego Sendra
    A customer of asked us to build him a multi-language based support VB6 scraper, for which we had the need to detect UTF-8 based encoded strings to decode it later for proper displaying in application UI. It's necessary to point out that this need arises based on VB6 limitations to natively support UTF-8 in its controls, contrary to what it happens in .NET where you can tell a control that it should expect UTF-8 encoding. VB6 natively supports ISO 8859-1 and/or Windows-1252 encodings only, for which textboxes, dropdowns, listview controls, others can't be defined to natively support/expect UTF-8 as you can do in .NET considering what we just explained; so we would see weird symbols such as é, è among others, making it a whole mess at the time of displaying. So, next function contains whole UTF-8 encoded punctuation marks and symbols from languages like Spanish, Italian, German, Portuguese, French and others, based on an excellent UTF-8 based list we got from this link - Ref. http://home.telfort.nl/~t876506/utf8tbl.html Basically, the function compares if each and one of the listed UTF-8 encoded sentences, separated by | (pipe) are found in our passed string making a substring search first. Whether it's not found, it makes an alternative ASCII value based search to get a match. Say, a string like "Societé" (Society in english) would return FALSE through calling isUTF8("Societé") while it would return TRUE when calling isUTF8("SocietÈ") since È is the UTF-8 encoded representation of é. Once you got it TRUE or FALSE, you can decode the string through DecodeUTF8() function for properly displaying it, a function we found somewhere else time ago and also included in this post. Function isUTF8(ByVal ptstr As String) Dim tUTFencoded As String Dim tUTFencodedaux Dim tUTFencodedASCII As String Dim ptstrASCII As String Dim iaux, iaux2 As Integer Dim ffound As Boolean ffound = False ptstrASCII = "" For iaux = 1 To Len(ptstr) ptstrASCII = ptstrASCII & Asc(Mid(ptstr, iaux, 1)) & "|" Next tUTFencoded = "Ä|Ã…|Ç|É|Ñ|Ö|ÃŒ|á|Ã|â|ä|ã|Ã¥|ç|é|è|ê|ë|í|ì|î|ï|ñ|ó|ò|ô|ö|õ|ú|ù|û|ü|â€|°|¢|£|§|•|¶|ß|®|©|â„¢|´|¨|â‰|Æ|Ø|∞|±|≤|≥|Â¥|µ|∂|∑|âˆ|Ï€|∫|ª|º|Ω|æ|ø|¿|¡|¬|√|Æ’|≈|∆|«|»|…|Â|À|Ã|Õ|Å’|Å“|–|—|“|â€|‘|’|÷|â—Š|ÿ|Ÿ|â„|€|‹|›|ï¬|fl|‡|·|‚|„|‰|Â|Ú|Ã|Ë|È|Ã|ÃŽ|Ã|ÃŒ|Ó|Ô||Ã’|Ú|Û|Ù|ı|ˆ|Ëœ|¯|˘|Ë™|Ëš|¸|Ë|Ë›|ˇ" & _ "Å|Å¡|¦|²|³|¹|¼|½|¾|Ã|×|Ã|Þ|ð|ý|þ" & _ "â‰|∞|≤|≥|∂|∑|âˆ|Ï€|∫|Ω|√|≈|∆|â—Š|â„|ï¬|fl||ı|˘|Ë™|Ëš|Ë|Ë›|ˇ" tUTFencodedaux = Split(tUTFencoded, "|") If UBound(tUTFencodedaux) > 0 Then iaux = 0 Do While Not ffound And Not iaux > UBound(tUTFencodedaux) If InStr(1, ptstr, tUTFencodedaux(iaux), vbTextCompare) > 0 Then ffound = True End If If Not ffound Then 'ASCII numeric search tUTFencodedASCII = "" For iaux2 = 1 To Len(tUTFencodedaux(iaux)) 'gets ASCII numeric sequence tUTFencodedASCII = tUTFencodedASCII & Asc(Mid(tUTFencodedaux(iaux), iaux2, 1)) & "|" Next 'tUTFencodedASCII = Left(tUTFencodedASCII, Len(tUTFencodedASCII) - 1) 'compares numeric sequences If InStr(1, ptstrASCII, tUTFencodedASCII) > 0 Then ffound = True End If End If iaux = iaux + 1 Loop End If isUTF8 = ffound End Function Function DecodeUTF8(s) Dim i Dim c Dim n s = s & " " i = 1 Do While i <= Len(s) c = Asc(Mid(s, i, 1)) If c And &H80 Then n = 1 Do While i + n < Len(s) If (Asc(Mid(s, i + n, 1)) And &HC0) <> &H80 Then Exit Do End If n = n + 1 Loop If n = 2 And ((c And &HE0) = &HC0) Then c = Asc(Mid(s, i + 1, 1)) + &H40 * (c And &H1) Else c = 191 End If s = Left(s, i - 1) + Chr(c) + Mid(s, i + n) End If i = i + 1 Loop DecodeUTF8 = s End Function

    Read the article

  • How to Get Windows 7 Theme Wallpapers Without Installing Them

    - by Mysticgeek
    Are you using an older version of Windows but like the Windows 7 theme wallpapers? What if you have Windows 7 but you don’t want to install the themes just to get the wallpapers? Here is how to get them without having to install themes. This guest article was written by Ryan Dozier from the Doztech tech blog. Getting the Wallpaper on XP, Vista, or Windows 7 First download and install 7-zip on your machine (link below). After you’ve installed 7-zip, download a Windows 7 theme (link below) and right-click on the theme, select 7-Zip, and Extract to “Theme Name”… A new folder will appear with the theme name on it. When you open it, there will be a folder called DesktopBackground or something similar. Open the folder to view the wallpapers for the theme. You can delete the extra files and just keep the wallpapers! Getting the Wallpaper on Ubuntu Extracting the wallpaper on Ubuntu can be a little tricky. Just follow these steps and you will be able to do it. First go to the Ubuntu Software Center under the Applications menu. Search for 7zip and click on the arrow to go to the applications menu. Find the Install button and click it. It will take a couple of minutes for 7zip to install. After 7zip installs, close the Ubuntu Software Center and download a Windows 7 theme. Store it somewhere you can access it quickly. Right-click on the theme and select Rename and get rid of the themepack extension and replace it with zip. The file should be “Theme Name.zip” after you rename it. Right-click on the theme and click Extract Here. After the extraction you will have a new folder with the theme name. Open it and go into the DesktopBackground folder to get the wallpapers. You can delete the extra files and just keep the wallpapers. If you want to get the new Windows 7 Themes Wallpapers, but don’t want to search and install them separately, this is a nice workaround. Links Get 7 zip for Windows here Get Windows 7 Themes here

    Read the article

  • Love and Hate Outlook autocomplete, Outlook 2010/Exchange 2010

    - by Kay Sellenrode
    I think that almost every Exchange admin can concur with me that the Outlook autocomplete cache is one of those things you love but at the same time also hate. Users mostly love this function, except when it fails. Luckily since Outlook 2010 things got a little better and we got rid of the dreaded nk2 files. Outlook 2010 now includes a folder named "Suggested Contacts"; all users you send an email to and that don't already have a contact object are saved in this suggested contacts folder. A lot of people thought this folder is also the source for the autocomplete cache, which would make it somewhat easy to manage; I wish the solution was that easy. Unfortunately, separate from the suggested contacts, Outlook still maintains a cache for the autocomplete function. Let us say you run into the following situation: John works for company A and is a popular contact for almost everyone in your organization. Now John quit his job at Company A and moved to Company B. Luckily John maintains your company as a customer, but his email address is now changed from companyA.com to companyB.com. Since you don't want to do any business with Company A anymore, you want to make sure none of your users accidentally mail to his old address. Now this is where the real fun starts, because almost all of your 1000 users have mailed at least once with John, resulting in the fact that every user most probably has John listed in their autocomplete cache. I have run into situations like this multiple times with several customers, which is always a pain. And of course this blog post is the result of one of those issues once again. I knew that with the Suggested Contacts we could do more than previously, but still never spent time on it before. But today I thought let's nail this now and forever!! OK, let's start off by noting that things are different for every combination of Outlook and Exchange. I'll explain the procedure for Exchange 2010 SP1+ in combination with Outlook 2010. At first we want to get rid of all contact objects that contain [email protected]. To do this we need to be assigned to the RBAC role "Mailbox Import Export", which can be done through the Exchange Control Panel. In my test environment I assigned this role to the Organization admins, but in real life you might want to add it to a custom role. Open the Exchange Control Panel by logging in to the ecp url, in my case https://ITFEX.itf.local/ECP, and make sure you selected your organization as management scope. Browse to Roles & Auditing, and open the properties for the Organization Management role group. Click on the Add button to add a new role to the Organization Management role group, select the Mailbox Import Export role and click on Add and OK to add it to the role.  
    Once you have assigned that role to your account you can open the Exchange Management Shell and execute the following command:
    Get-Mailbox -ResultSize unlimited | Search-Mailbox -TargetMailbox "your.account" -TargetFolder searchanddelete -LogLevel Full -LogOnly -SearchQuery "kind:contact AND [email protected]"
    This command will create a list with all mailboxes and any contacts that were found with an email address that contains [email protected]; this list is then posted in the mailbox you specified at your.account in the folder searchanddelete. Now examine the report that was created and posted in the mailbox to see if it matches what you think it should match. My results looked like this: When you're confident that the search includes all references and no false positives, you can execute almost the same command, but this time with a delete action instead of the logonly.
    Get-Mailbox -ResultSize unlimited | Search-Mailbox -TargetMailbox "your.account" -TargetFolder searchanddelete -LogLevel Full -DeleteContent -SearchQuery "kind:contact AND [email protected]"
    Now most people would think this would remove the contact object from the suggested contacts, resulting in a removal from the autocomplete list. Sad but not true; to clean up the autocomplete list, start Outlook with the command: "outlook /cleanautocompletecache" This will result in an empty cache, but luckily this is rebuilt based on the suggested contacts, which now doesn't include the [email protected] contact anymore.

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories: Constraining the amount of time that the Console stalls waiting for communication Reducing and streamlining the amount of data required for an update A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's configuration: general, advanced section for a domain. The default value for this setting is 0, which means wait indefinitely and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency. In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support this. For example, if a user requests a Servers Table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console. To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is: For each configuration mbean (ie, Servers) populate rows with configuration attributes from the fast, local mbean server Find a WorkManager For each server, Create a Work instance to obtain runtime mbean attributes for the server Schedule Work instance in the WorkManager Call WorkManager.waitForAll to wait WorkItems to finish, constrained by Management Operation Timeout For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data Display collected data in table In addition to these changes to constrain how long the console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.) 
    We decided supporting that edge case did not warrant the performance impact for all, and instead now only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment. Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:
    - Using Work Management to obtain health concurrently
    - Applying the operation timeout configuration to constrain how long we will wait
    - Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old
    - Eliminating health collection if this portlet is minimized
    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
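    For readers who want to see the shape of the fan-out-and-bounded-wait pattern described above, here is a rough analogue written with plain java.util.concurrent rather than WebLogic's actual Work Management API (the class and method names below are illustrative, not WebLogic's): each server is queried concurrently, the caller waits at most the Management Operation Timeout, and servers that do not respond in time are rendered with local configuration data only and flagged as incomplete.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.Callable;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;
      import java.util.concurrent.TimeUnit;

      public class BoundedFanOutSketch {

          // Stand-in for the runtime attributes fetched from one managed server
          static class ServerRuntimeInfo {
              final String serverName;
              final boolean complete;
              ServerRuntimeInfo(String serverName, boolean complete) {
                  this.serverName = serverName;
                  this.complete = complete;
              }
          }

          static List<ServerRuntimeInfo> collect(List<String> servers, long timeoutMillis)
                  throws InterruptedException {
              ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, servers.size()));
              List<Callable<ServerRuntimeInfo>> tasks = new ArrayList<>();
              for (String server : servers) {
                  tasks.add(() -> new ServerRuntimeInfo(server, fetchRuntimeMBeans(server)));
              }
              // invokeAll waits at most timeoutMillis; unfinished tasks are cancelled
              List<Future<ServerRuntimeInfo>> futures =
                      pool.invokeAll(tasks, timeoutMillis, TimeUnit.MILLISECONDS);
              pool.shutdownNow();

              List<ServerRuntimeInfo> rows = new ArrayList<>();
              for (int i = 0; i < futures.size(); i++) {
                  Future<ServerRuntimeInfo> f = futures.get(i);
                  if (f.isCancelled()) {
                      // Timed out: show local configuration only and flag the row as incomplete
                      rows.add(new ServerRuntimeInfo(servers.get(i), false));
                  } else {
                      try {
                          rows.add(f.get());
                      } catch (Exception e) {
                          rows.add(new ServerRuntimeInfo(servers.get(i), false));
                      }
                  }
              }
              return rows;
          }

          // Placeholder for the slow remote mbean call to one managed server
          private static boolean fetchRuntimeMBeans(String server) {
              return true;
          }
      }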

    Read the article

  • SSIS - Connect to Oracle on a 64-bit machine (Updated for SSIS 2008 R2)

    - by jorg
    We recently had a few customers where a connection to Oracle on a 64 bit machine was necessary. A quick search on the internet showed that this could be a big problem. I found all kinds of blog and forum posts of developers complaining about this. A lot of developers will recognize the following error message: "Test connection failed because of an error in initializing provider. Oracle client and networking components were not found. These components are supplied by Oracle Corporation and are part of the Oracle Version 7.3.3 or later client software installation. Provider is unable to function until these components are installed." After a lot of searching, trying and debugging I think I found the right way to do it!
    Problems Because BIDS is a 32 bit application, on both 32 and 64 bit machines, it cannot see the 64 bit driver for Oracle. Because of this, connecting to Oracle from BIDS on a 64 bit machine will never work when you install the 64 bit Oracle client. Another problem is the "Microsoft Provider for Oracle": this driver only exists in a 32 bit version and Microsoft has no plans to create a 64 bit one in the near future. The last problem I know of is in the Oracle client itself: it seems that a connection will never work with the instant client, so always use the full client. There are also a lot of problems with the 10G client, one of which is the fact that this driver can't handle the "(x86)" in the path of SQL Server. So using the 10G client is no option!
    Solution
    - Download the Oracle 11G full client.
    - Install the 32 AND the 64 bit version of the 11G full client (Installation Type: Administrator) and reboot the server afterwards. The 32 bit version is needed for development from BIDS, which is 32 bit; the 64 bit version is needed for production with the SQLAgent, which is 64 bit.
    - Configure the Oracle clients (both 32 and 64 bits) by editing the files tnsnames.ora and sqlnet.ora. Try to do this with an Oracle DBA or, even better, let him/her do this.
    - Use the "Oracle provider for OLE DB" from SSIS; don't use the "Microsoft Provider for Oracle" because a 64 bit version of it does not exist.
    - Schedule your packages with the SQLAgent.
    Background information Visual Studio (BI Dev Studio) is a 32bit application. SQL Server Management Studio is a 32bit application. dtexecui.exe is a 32bit application. dtexec.exe has both 32bit and 64bit versions. There are x64 and x86 versions of the Oracle provider available. SQLAgent is a 64bit process. My advice to BI consultants is to get an Oracle DBA or professional for the installation and configuration of the 2 full clients (32 and 64 bit). Tell the DBA to download the biggest client available; this way you are sure that they pick the right one ;-) Testing if the clients have been installed and configured in the right way can be done with the Windows ODBC Data Source Administrator: Start... Programs... Administrative tools... Data Sources (ODBC)
    ADDITIONAL STEPS FOR SSIS 2008 R2 It seems that, unfortunately, some additional steps are necessary for SQL Server 2008 R2 installations: 1. Open REGEDIT (Start… Run… REGEDIT) on the server and search for the following entry (for the 32 bits driver): HKEY_LOCAL_MACHINE\Software\Microsoft\MSDTC\MTxOCI Make sure the following values are entered: 2. Next, search for (for the 64 bits driver): HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\MSDTC\MTxOCI Make sure the same values as above are entered. 3. Reboot your server.

    Read the article

  • JavaOne Latin America 2012 Trip Report

    - by reza_rahman
    JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. The conference was a resounding success with a great vibe, excellent technical content and numerous world class speakers. Some notable local and international speakers included Bruno Souza, Yara Senger, Mattias Karlsson, Vinicius Senger, Heather Vancura, Tori Wieldt, Arun Gupta, Jim Weaver, Stephen Chin, Simon Ritter and Henrik Stahl. Topics covered included the JCP/JUGs, Java SE 7, HTML 5/WebSocket, CDI, Java EE 6, Java EE 7, JSF 2.2, JMS 2, JAX-RS 2, Arquillian and JavaFX. Bruno Borges and I manned the GlassFish booth at the Java Pavilion on Tuesday and Wednesday. The booth traffic was decent and not too hectic. We met a number of GlassFish adopters including perhaps one of the largest GlassFish deployments in Brazil as well as some folks migrating to Java EE from Spring. We invited them to share their stories with us. We also talked with some key members of the local Java community. Tuesday evening we had the GlassFish party at the Tribeca Pub. The party was definitely a hit and we could have used a larger venue (this was the first time we had the GlassFish party in Brazil). Along with GlassFish enthusiasts, a number of Java community leaders were there. We met some of the same folks again at the JUG leaders' party on Wednesday evening. On Thursday Arun Gupta, Bruno Borges and I ran a hands-on-lab on JAX-RS, WebSocket and Server-Sent Events (SSE) titled "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". This is the same Java EE 7 lab run at JavaOne San Francisco. The lab provides developers a first hand glimpse of what an HTML 5 powered Java EE application might look like. We had an overflow crowd for the lab (at one point we had about twenty people standing) and the lab went very well. The slides for the lab are here: Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket from Reza Rahman The actual contents for the lab are available here. Give me a shout if you need help getting it up and running. I gave two solo talks following the lab. The first was on JMS 2 titled "What’s New in Java Message Service 2". This was essentially the same talk given by JMS 2 specification lead Nigel Deakin at JavaOne San Francisco. I talked about the JMS 2 simplified API, JMSContext injection, delivery delays, asynchronous send, JMS resource definition in Java EE 7, standardized configuration for JMS MDBs in EJB 3.2, mandatory JCA pluggability and the like. The session went very well, there was good Q & A and someone even told me this was the best session of the conference! The slides for the talk are here: What’s New in Java Message Service 2 from Reza Rahman My last talk for the conference was on JAX-RS 2 in the keynote hall. Titled "JAX-RS 2: New and Noteworthy in the RESTful Web Services API" this was basically the same talk given by the specification leads Santiago Pericas-Geertsen and Marek Potociar at JavaOne San Francisco. I talked about the JAX-RS 2 client API, asynchronous processing, filters/interceptors, hypermedia support, server-side content negotiation and the like. The talk went very well and I got a few very kind compliments afterwards. 
    The slides for the talk are here: JAX-RS 2: New and Noteworthy in the RESTful Web Services API from Reza Rahman On a more personal note, Sao Paulo has always had a special place in my heart as the incubating city for Sepultura and Soulfly -- two of my most favorite heavy metal musical groups of all time! Consequently, the city has a perpetually alive and kicking metal scene pretty much any given day of the week. This time I got to check out a solid performance by local metal band Republica at the legendary Manifesto Bar. I also wanted to see a Dio Tribute at the Blackmore but ran out of time and energy... Overall I enjoyed the conference/Sao Paulo and look forward to going to Brazil again next year!
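    As a taste of the JMS 2 material from the first talk, the following sketch shows the simplified API features mentioned above (JMSContext, delivery delays, asynchronous send) in a standalone client. The JNDI names are made up for the example; in a Java EE 7 container the JMSContext would simply be injected with @Inject, and the spec restricts asynchronous send to Java SE clients.

      import javax.jms.CompletionListener;
      import javax.jms.ConnectionFactory;
      import javax.jms.JMSContext;
      import javax.jms.Message;
      import javax.jms.Queue;
      import javax.naming.InitialContext;

      public class Jms2Sketch {
          public static void main(String[] args) throws Exception {
              // Assumes a jndi.properties pointing at the JMS provider; names are hypothetical
              InitialContext jndi = new InitialContext();
              ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/myConnectionFactory");
              Queue queue = (Queue) jndi.lookup("jms/myQueue");

              // JMSContext replaces the classic Connection/Session pair and is AutoCloseable
              try (JMSContext context = cf.createContext()) {
                  context.createProducer()
                         .setDeliveryDelay(5000)                  // deliver no earlier than 5 seconds from now
                         .setAsync(new CompletionListener() {     // asynchronous send
                             public void onCompletion(Message msg) { System.out.println("sent"); }
                             public void onException(Message msg, Exception e) { e.printStackTrace(); }
                         })
                         .send(queue, "Hello JMS 2");
              }
          }
      }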

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF
    It can also be created using the dsconfig command line as described below. Creation of a new naming context consists of 3 steps:
    First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base dn, e.g. o=example.
      dsconfig create-workflow-element \
          --set base-dn:o=example \
          --set enabled:true \
          --type db-local-backend \
          --element-name myNewDb \
          --hostname <your host> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt
    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base dn.
      dsconfig create-workflow \
          --set base-dn:o=example \
          --set enabled:true \
          --set workflow-element:myNewDb \
          --type generic \
          --workflow-name workFlowForMyNewDb \
          --hostname <your host name> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt
    Then, the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the Workflow to the appropriate Network Group. A Network Group is used to classify incoming client connections and route requests to workflows.
      dsconfig set-network-group-prop \
          --group-name network-group \
          --add workflow:workFlowForMyNewDb \
          --hostname <your hostname> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt
    At that stage, it is possible to import entries to the new naming context o=example.

    Read the article

  • How does the Trash Can work? Where can I find official specification / documentation / reference about it?

    - by MestreLion
    When trying to manage trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why: For non-/ partitions, it always use <driveroot>/.Trash-<uid>, It never used <driveroot>/.Trash/<uid>, even when i created it in advance. While this works, its annoying: if i have 15 users, i end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And specs say "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why it never uses it? root trash does not work, at least not out of the box. Open nautilus as root and click on trash, it gives error. Try to delete any file, it says "it cant move to trash". Ok, i know this can be fixed by creating /root/.local/share. But specs says "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why error then? Bug? Why do i must change /etc/fstab entries for mounted volumes, adding options like uid and guid, if the volumes are already mounted as RW for everyone? These are just some examples of deviation from standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can i find technical reference about Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow specs, please tell me what am i doing wrong, specially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks! EDIT: Some more info: If a given fs has no support for sticky bit (VFAT, NTFS), it probably dont have for permitions either (at least VFAT surely doesnt). So what prevents one user for purging / restoring other users ./Trash-xxx ? If one can read/write his own Trash, he can also do the same for the whole drive, including other's trashes, isnt it? Or does Gnome has any "extra" protection on ./Trash-xxx folders on VFAT/NTFS fs? If Linux can "emulate" file permitions on NTFS mounting by editing /fstab uid and gid options, can it also "emulate" the sticky bit? I would really want to use /.Trash/xxx format... For the root issue: for the / partition, i can trash as root, and it goes to /root/.local/shate/Trash. But if i click on Nautilus "Trash" (as root), i get an error. Dont you? So files are correctly trashed, but i cant access it. All i can do is manually "purge" them (by deleting files on /root/.local/shate/Trash), but restoring would be very tricky (opening info files and manually moving, etc) For non-/ partitions (or at least for VFAT/NTFS), I can not even trash as root: it does not create a ./Trash-0 folder, it simply says "Cannot trash, want to permantly delete?" Why? About fstab: i use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really cluttter desktop and/or Nautilus. Id rather have it pre mounted, integrated in my fs, in mounts like /data , /windows/xp , /windows/vista , and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives Si, if Ubuntu/Gnome truly follow the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?

    Read the article

  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why: For non-/ partitions, it always uses <driveroot>/.Trash-<uid>; it never used <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And specs say "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it? root trash does not work, at least not out of the box. Open nautilus as root and click on trash; it gives an error. Try to delete any file, it says "it can't move to trash". Ok, I know this can be fixed by creating /root/.local/share. But the specs say "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why the error then? Bug? Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone? These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow specs, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks! EDIT: Some more info: If a given fs has no support for the sticky bit (VFAT, NTFS), it probably doesn't have support for permissions either (at least VFAT surely doesn't). So what prevents one user from purging / restoring other users' ./Trash-xxx ? If one can read/write his own Trash, one can do the same for the whole drive, including others' trashes, correct? Or does Gnome have some kind of "extra" protection on ./Trash-xxx folders on VFAT/NTFS fs? If Linux can "emulate" file permissions on NTFS mounting by editing /fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use /.Trash/xxx format... For the root issue: for the / partition, I can use trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access it. All I can do is manually "purge" them (by deleting files on /root/.local/share/Trash), but restoring would be very tricky (opening info files and manually moving, etc.). For non-/ partitions (or at least for VFAT/NTFS), I cannot even use trash as root: it does not create a ./Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why? About fstab: I use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have it pre-mounted, integrated in my fs, in mounts like /data, /windows/xp, /windows/vista, and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives. 
So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?
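For reference, here is a minimal Python sketch of the decision procedure the FreeDesktop.org trash specification describes for a file on a non-home partition. This is only an illustration of the spec as written, not Ubuntu's or Gnome's actual implementation, and the /data path is a placeholder:

    import os, stat

    def pick_trash_dir(topdir, uid):
        # Spec step 1: $topdir/.Trash must exist, must not be a symlink and must
        # have the sticky bit set; only then is $topdir/.Trash/$uid used
        # (created if needed).
        shared = os.path.join(topdir, ".Trash")
        if (os.path.isdir(shared) and not os.path.islink(shared)
                and os.stat(shared).st_mode & stat.S_ISVTX):
            per_user = os.path.join(shared, str(uid))
            os.makedirs(per_user, exist_ok=True)
            return per_user
        # Spec step 2: otherwise fall back to $topdir/.Trash-$uid (created if needed).
        fallback = os.path.join(topdir, ".Trash-%d" % uid)
        os.makedirs(fallback, exist_ok=True)
        return fallback

    print(pick_trash_dir("/data", os.getuid()))

Note that step 1 requires the sticky bit, which VFAT/NTFS mounts generally cannot represent; when that check fails, the spec itself directs implementations to the .Trash-$uid fallback, which may be one reason the shared .Trash directory is being ignored on those volumes.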

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:
Frequent local commits - This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. Two hours without one is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified that a set of changes works, save them away; otherwise you run the risk of introducing bugs into them when working on the next task.
The notion of a task - By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles and I think you will find what is comfortable for you.
Partial commits - Sometimes one task explodes or unknowingly encompasses other tasks; at this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way and focus on the remainder. This will often entail committing part of the work and continuing on the rest.
Outstanding changes as a guide - If you don't commit often, it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the currently selected file and a commit message all in one window that I keep maximized on one monitor at all times.
Throw away / stash commits - There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often, you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory refactoring. It's much easier if I can just revert all outstanding changes.
Sync with the central repository daily - The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chance of merge conflicts, which just decreases productivity. It also prevents us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.
Things I discourage:
Lots of partial commits right at the end of a series of changes - If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't committing frequently, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.
Committing single files - Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work; it should be spaced out over the course of several tasks, not all at the end in a five-minute window.

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking? Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are always and have always been taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory. Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data, using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well-documented, and it seems likely that many will also give BACPAC the bum's rush. Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.
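As an aside, for anyone wanting to script the Database Copy stop-gap mentioned above, here is a minimal Python/pyodbc sketch. The server, database and credential names are placeholders, and the ODBC driver name is an assumption for that era; the CREATE DATABASE ... AS COPY OF statement is the documented SQL Azure copy syntax:

    import pyodbc

    # Connect to the logical server's master database. CREATE DATABASE cannot
    # run inside a transaction, hence autocommit=True.
    conn = pyodbc.connect(
        "Driver={SQL Server Native Client 10.0};"
        "Server=tcp:yourserver.database.windows.net;"
        "Database=master;Uid=admin_user@yourserver;Pwd=your_password;",
        autocommit=True)

    cur = conn.cursor()
    # Kick off an asynchronous, server-side copy of the database.
    cur.execute("CREATE DATABASE MyDb_copy AS COPY OF MyDb")
    conn.close()

Progress of the copy can then be polled from the sys.dm_database_copies view on the server (again, an assumption based on the SQL Azure documentation of the time); none of this, of course, gets the data out of the Azure universe, which is the whole point of Tony's complaint.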

    Read the article

  • How to autostart Kupfer in Ubuntu 13.10?

    - by JJD
    I built Kupfer from source on Ubuntu 13.10 and installed it into ./local. In the preferences I checked Start automatically on login. However, Kupfer does not automatically start. The desktop file in ~/.local/share/applications/kupfer.desktop looks like this: [Desktop Entry] Version=1.0 Name=Kupfer Name[cs]=Kupfer Name[da]=Kupfer Name[de]=Kupfer Name[el]=Kupfer Name[es]=Kupfer Name[eu]=Kupfer Name[fr]=Kupfer Name[gl]=Kupfer Name[hu]=Kupfer Name[it]=Kupfer Name[ko]=?? Name[nb]=Kupfer Name[nl]=Kupfer Name[pl]=Kupfer Name[pt]=Kupfer Name[pt_BR]=Kupfer Name[ru]=Kupfer Name[sl]=Kupfer Name[sv]=Kupfer Name[tr]=Kupfer Name[zh_CN]=Kupfer GenericName=Application Launcher GenericName[ca]=Llançador d'aplicació GenericName[cs]=Spouštec aplikací GenericName[da]=Programopstarter GenericName[de]=Anwendungsstarter GenericName[el]=??????t?? efa?µ???? GenericName[es]=Lanzador de aplicaciones GenericName[eu]=Aplikazioen abiarazlea GenericName[fr]=Lanceur d'applications GenericName[gl]=Iniciador de aplicativos GenericName[hu]=Alkalmazásindító GenericName[it]=Lanciatore di applicazioni GenericName[ko]=?? ???? ?? ??? GenericName[nb]=Programstarter GenericName[nl]=Programmastarter GenericName[pl]=Aktywator programów GenericName[pt]=Lançador de Aplicações GenericName[pt_BR]=Lançador de aplicativos GenericName[ru]=???????? ??????? ?????????? GenericName[sl]=Zaganjalnik programov GenericName[sv]=Programstartare GenericName[tr]=Uygulama Çalistirici GenericName[zh_CN]=????? Comment=Convenient command and access tool for applications and documents Comment[cs]=Nástroj pro pohodlné provádení príkazu a prístup k aplikacím a dokumentum Comment[da]=Nemt kommando- og adgangsværktøj til programmer og dokumenter Comment[de]=Praktisches Befehls- und Zugriffswerkzeug für Anwendungen und Dokumente Comment[el]=?????? e??a?e?? e?t???? ?a? p??sßas?? ??a efa?µ???? ?a? ????afa Comment[es]=Herramienta para acceso y manejo de aplicaciones y documentos Comment[eu]=Komando eta atzipen tresna egokia aplikazio eta dokumentuentzat Comment[fr]=Outil pratique pour accéder à des documents et lancer des applications Comment[gl]=Ferramenta cómoda para controlar e acceder a aplicativos e documentos Comment[hu]=Kényelmes parancs és hozzáférési eszköz az alkalmazásokhoz és dokumentumokhoz Comment[it]=Comodo comando e strumento di accesso per applicazioni e documenti Comment[ko]=???? ???????? ??? ???? ??? ?? ? ?? ?? Comment[nb]=Praktiskt kommandoverktøy for programmer og dokumenter Comment[nl]=Handige opdracht- en toegangshulp voor programma's en documenten Comment[pl]=Wygodne narzedzie do uruchamiania programów i otwierania dokumentów Comment[pt]=Ferramenta conveniente para acesso e gestão de aplicações e documentos Comment[pt_BR]=Uma conveniente ferramenta de comando e acesso para aplicativos e documentos Comment[ru]=??????? ?????????? ??? ???????? ??????? ? ?????????? ? ?????????? Comment[sl]=Prikladno orodje za izvajanje ukazov in dostopa do programov in dokumentov Comment[sv]=Praktiskt kommandoprogram för åtkomst av program och dokument Comment[zh_CN]=???????????????? Icon=kupfer Exec=python /home/user/.local/share/kupfer/kupfer.py %F Type=Application Categories=Utility; StartupNotify=true X-UserData=$CONFIG/kupfer;$DATA/kupfer;$CACHE/kupfer Terminal=false
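A workaround, if the preference toggle never writes the autostart entry: Gnome launches everything it finds in ~/.config/autostart at login (per the XDG autostart spec), so you can copy the desktop file there yourself. Below is a minimal Python sketch, assuming the kupfer.desktop shown above is the one you want to reuse and that its Exec path is already correct for your machine:

    import os, shutil

    home = os.path.expanduser("~")
    src = os.path.join(home, ".local/share/applications/kupfer.desktop")
    dst_dir = os.path.join(home, ".config/autostart")
    os.makedirs(dst_dir, exist_ok=True)
    # Gnome runs every *.desktop file in ~/.config/autostart at login
    # unless it is marked Hidden=true or X-GNOME-Autostart-enabled=false.
    shutil.copy(src, os.path.join(dst_dir, "kupfer.desktop"))

Since the Exec line points at a script under /home/user/.local, double-check that the path is absolute and valid for your user; autostart entries are not guaranteed to see your shell's $PATH, so an absolute path in Exec is the safest bet.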

    Read the article

  • saving and retrieving a text file in java?

    - by user3319432
    import java.sql.*;
    import java.awt.*;
    import javax.swing.*;
    import java.awt.event.*;

    public class saving extends JFrame implements ActionListener {

        JTextField edpno = new JTextField(10);
        JLabel l0 = new JLabel("EDP Number: ");
        JComboBox fname = new JComboBox();
        JLabel l1 = new JLabel("First Name: ");
        JTextField lname = new JTextField(20);
        JLabel l2 = new JLabel("Last Name: ");
        JComboBox contno = new JComboBox();
        JLabel l3 = new JLabel("Contact Number: ");
        JButton bOK = new JButton("Save");
        JButton bRetrieve = new JButton("Retrieve");
        private ImageIcon icon;

        // Panel that paints a background image behind the form.
        JPanel C = new JPanel() {
            protected void paintComponent(Graphics g) {
                g.drawImage(icon.getImage(), 0, 0, null);
                super.paintComponent(g);
            }
        };

        // Constructor (the original declared "public Search Record()", which does not compile).
        public saving() {
            icon = new ImageIcon("images/canres.png");
            C.setOpaque(false);
            C.setLayout(new GridLayout(5, 2, 4, 4));
            setTitle("Search Record");
            C.add(l0);
            C.add(edpno);
            edpno.addActionListener(this);
            C.add(l1);
            C.add(fname);
            fname.setForeground(Color.BLUE);
            fname.setFont(new Font(" ", Font.BOLD, 15));
            C.add(l2);
            C.add(lname);
            C.add(l3);
            C.add(contno);
            contno.setForeground(Color.BLUE);
            contno.setFont(new Font(" ", Font.BOLD, 15));
            C.add(bOK);
            bOK.addActionListener(this);
            C.add(bRetrieve);
            bRetrieve.addActionListener(this);
            bOK.setBackground(Color.white);
            bRetrieve.setBackground(Color.white);
            // Added so the window actually shows the panel.
            setContentPane(C);
            setDefaultCloseOperation(EXIT_ON_CLOSE);
            pack();
            setVisible(true);
        }

        // Despite the original name "saverecord", this method retrieves a record
        // by EDP number and fills the form fields.
        public void retrieveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                // Column/table names kept as posted; note "contact number" contains a
                // space and will need [brackets] or a rename before it will run.
                ResultSet rs = s.executeQuery(
                        "select firstname, Lastname, contact number from name WHERE edpno ='" + edpno.getText() + "'");
                while (rs.next()) {
                    fname.setSelectedItem(rs.getString(1));
                    lname.setText(rs.getString(2));
                    contno.setSelectedItem(rs.getString(3));
                }
                s.close();
                con.close();
            } catch (Exception q) {
                JOptionPane.showMessageDialog(this, q);
            }
        }

        // Saves the form fields back to the database.
        public void SaveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                String sql = "UPDATE rooms SET Firstname='" + fname.getSelectedItem()
                        + "',Lastname='" + lname.getText()
                        + "',Contactnumber='" + contno.getSelectedItem()
                        + "' WHERE '" + edpno.getText() + "'=edpno";
                s.executeUpdate(sql);
                JOptionPane.showMessageDialog(this, "New room Record has been successfully saved");
                dispose();
                s.close();
                con.close();
            } catch (Exception mismatch) {
                JOptionPane.showMessageDialog(this, mismatch);
            }
        }

        public void actionPerformed(ActionEvent ako) {
            if (ako.getSource() == edpno) {
                // Pressing Enter in the EDP field looks the record up.
                retrieveRecord();
            } else if (ako.getSource() == bRetrieve) {
                dispose();
            } else if (ako.getSource() == bOK) {
                SaveRecord();
            }
        }

        public static void main(String[] args) {
            // The original called "new Search()", which does not match the class name.
            new saving();
        }
    }

    Read the article

  • i want to access mysql database table on given conditions in drop down menu [on hold]

    - by user3909877
    The code below accesses the database table directly, but I want it to display the table contents only when conditions are chosen in the drop-down menus - e.g. when I select Islamabad in one drop-down and Lahore in the other (as in the code) and press the search button, it should then display the flights table. At the moment it displays it unconditionally.

    <p class="h2">Quick Search</p>
    <div class="sb2_opts">
    <p> </p>
    <form method="post" action="">
      <p>Enter your source and destination.</p>
      <p>From:</p>
      <select name="from">
        <option value="Islamabad">Islamabad</option>
        <option value="Lahore">Lahore</option>
        <option value="murree">Murree</option>
        <option value="Muzaffarabad">Muzaffarabad</option>
      </select>
      <p>To:</p>
      <select name="To">
        <option value="Islamabad">Islamabad</option>
        <option value="Lahore">Lahore</option>
        <option value="murree">Murree</option>
        <option value="Muzaffarabad">Muzaffarabad</option>
      </select>
      <input type="submit" value="search" />
    </form>

    <?php
    // Note the capital 'T': the form field is named "To", so $_POST['to'] would always be empty.
    $from = isset($_POST['from']) ? $_POST['from'] : '';
    $to   = isset($_POST['To'])   ? $_POST['To']   : '';

    if ($from == 'Islamabad') {
        if ($to == 'Lahore') {
            $db_host  = 'localhost';
            $db_user  = 'root';
            $database = 'homedb';
            $table    = 'flights';

            if (!mysql_connect($db_host, $db_user)) die("Can't connect to database");
            if (!mysql_select_db($database))        die("Can't select database");

            // Run the query once (the original ran it twice).
            $result = mysql_query("SELECT * FROM {$table}");
            if (!$result) {
                die("Query to show fields from table failed");
            }

            echo "<h1>Table: {$table}</h1>";
            echo "<table border='1'>";
            while ($row = mysql_fetch_row($result)) {
                echo "<tr>";
                // $row is an array; print every cell of the row.
                foreach ($row as $cell)
                    echo "<td>$cell</td>";
                echo "</tr>\n";
            }
            echo "</table>";
            // The original called mysqli_close($con) on an undefined $con.
            mysql_close();
        }
    }
    ?>

    Read the article

  • how to access mysql table from wamp database using this php code? [migrated]

    - by user3909877
    How do I access tables from a database using PHP on a WAMP server? I have written the following code but it is not working for some reason. Is there anything I need to put in action=""? It gives no error but keeps displaying the same page. I want to display a table from the database for any combination chosen in the drop-down menus when the search button is pressed.

    <p class="h2">Quick Search</p>
    <div class="sb2_opts">
    <p> </p>
    <form method="post" action="">
      <p>Enter your source and destination.</p>
      <p>From:</p>
      <select name="from">
        <option value="Islamabad">Islamabad</option>
        <option value="Lahore">Lahore</option>
        <option value="murree">Murree</option>
        <option value="Muzaffarabad">Muzaffarabad</option>
      </select>
      <p>To:</p>
      <select name="To">
        <option value="Islamabad">Islamabad</option>
        <option value="Lahore">Lahore</option>
        <option value="murree">Murree</option>
        <option value="Muzaffarabad">Muzaffarabad</option>
      </select>
      <input type="submit" value="search" />
    </form>

    <?php
    if (isset($_POST['from']) and isset($_POST['To'])) {
        $from  = $_POST['from'];
        $to    = $_POST['To'];
        $route = array($from, $to);

        // Connect with host, user, password and database in one call
        // (the original passed only "localhost" and then mixed in the old mysql_select_db API).
        $con = mysqli_connect("localhost", "root", "", "homedb");
        if (mysqli_connect_errno()) {
            echo "Failed to connect to MySQL: " . mysqli_connect_error();
        }

        switch ($route) {
            case array("Islamabad", "Lahore"):
                $result = mysqli_query($con, "SELECT * FROM flights");          // table: flights
                break;
            case array("Islamabad", "murree"):                                  // form value is lowercase "murree"
                $result = mysqli_query($con, "SELECT * FROM `isb to murree`");  // table: isb to murree
                break;
            case array("Islamabad", "Muzaffarabad"):
                $result = mysqli_query($con, "SELECT * FROM `isb to muzz`");    // table: isb to muzz
                break;
            default:
                $result = false;
                echo "Your choice is not valid !!";
        }

        // Print the rows; the original never output $result.
        if ($result) {
            echo "<table border='1'>";
            while ($row = mysqli_fetch_row($result)) {
                echo "<tr>";
                foreach ($row as $cell)
                    echo "<td>$cell</td>";
                echo "</tr>";
            }
            echo "</table>";
        }
        mysqli_close($con);
    }
    ?>

    Read the article

  • logparser not matching on a LIKE pattern

    - by user79339
    Hi, I seem to have the strangest problem. I am using LogParser to search an event log for some text that I know is there (I copied and pasted the string from the event into the SQL search string). But the SQL LIKE statement is returning empty results, while other LIKE statements seem to be working fine. I have even tried using two '%' symbols ('%%NavigationOccuredEventHandler%%') in case the shell was trying to replace the search pattern with an environment variable, escaping the % with a \ and with a ', but all of these just give me a "No valid LIKE mask" error. My LogParser command - C:\Program Files\Log Parser 2.2\LogParser.exe "select * from D:\Temp\07i132ppa1_app.evt where Message like '%NavigationOccuredEventHandler%' " -i:EVT -o:Datagrid The entry in the event log (found using "Select * from D:\Temp\07i132ppa1_app.evt" and doing a copy-paste of the relevant row) - 'D:\Temp\07i132ppa1_app.evt 5976788 2010-03-09 11:53:23 2010-03-09 11:53:23 2 1 Error event 0 None ICP Timestamp: 9/03/2010 1:53:23 AM Message: Error # 068464030040-07I132PPA1 System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.NullReferenceException: Object reference not set to an instance of an object. at ClientRegistration.Controller.ContactDetailsController.NavigationOccuredEventHandler(Object sender, NavigateEventArgs e) at Microsoft.ApplicationBlocks.UIProcess.UIPManager.NavigateEventHandler.Invoke(Object sender, NavigateEventArgs e) at Microsoft.ApplicationBlocks.UIProcess.UIPManager.InvokeEventHandlers(State state) in . . . Truncated for brevity ' Output: Statistics: Elements processed: 240993 Elements output: 0 Execution time: 59.47 seconds But if I search for the pattern '%object reference not set%' it works fine and returns results. I copied and pasted the string into a dummy SQL table and ran the SQL query there and it works fine; it just doesn't seem to work in LogParser. Very baffling. Any help would be much appreciated.

    Read the article

  • Exception on SslStream.AuthenticateAsClient (The message was badly formatted)

    - by Noms
    I have got a weird problem going on. I am trying to connect to an Apple server via TCP/SSL, using a client certificate provided by Apple for push notifications. I installed the certificate on my server (Win2k3) in both the Local Trusted Root certificates and the Local Personal Certificates folders. Now I have a class library that deals with that connection; when I call this class library from a console application running on the server it works absolutely fine, but when I call that class library from an asp.net page or asmx web service I get the following exception. A call to SSPI failed, see inner exception. The message received was unexpected or badly formatted. This is my code: X509Certificate cert = new X509Certificate(certificateLocation, certificatePassword); X509CertificateCollection certCollection = new X509CertificateCollection(new X509Certificate[1] { cert }); // OPEN the new SSL Stream SslStream ssl = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null); ssl.AuthenticateAsClient(ipAddress, certCollection, SslProtocols.Default, false); ssl.AuthenticateAsClient is where the error gets thrown. This is driving me nuts. If the console application can connect fine, there must be some problem with the asp.net network layer security that is failing the authentication... not sure, perhaps I need to add something or some sort of security policy in the web.config. Also, just to point out that I can connect fine on my local development machine with both the console and the website. Anyone got any ideas?

    Read the article

  • Passing parameters to telerik asp.net mvc grid

    - by GlobalCompe
    I have a Telerik ASP.NET MVC grid which needs to be populated based on the search criteria the user enters in separate text boxes. The grid uses the ajax method to load itself initially as well as to do paging. How can one pass the search parameters to the grid so that it sends those parameters "every time" it calls the ajax method in response to the user clicking on another page to go to the data on that page? I read Telerik's user guide but it does not mention this scenario. The only way I have been able to do the above is by passing the parameters to the rebind() method on the client side using jQuery. The issue is that I am not sure if this is the "official" way of passing parameters, one which will always work even after updates. I found this method in this post on Telerik's site: link text I have to pass in multiple parameters. The action method in the controller, when called by the Telerik grid, runs the query again based on the search parameters. Here is a snippet of my code: $("#searchButton").click(function() { var grid = $("#Invoices").data('tGrid'); var startSearchDate = $("#StartDatePicker-input").val(); var endSearchDate = $("#EndDatePicker-input").val(); grid.rebind({ startSearchDate: startSearchDate , endSearchDate: endSearchDate }); } );

    Read the article

< Previous Page | 409 410 411 412 413 414 415 416 417 418 419 420  | Next Page >