Search Results

Search found 16059 results on 643 pages for 'global temp tables'.

Page 15/643 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Compare Your Internet Cost and Speed to Global Averages [Infographic]

    - by ETC
    Internet pricing and speed vary wildly across the world. The US, for instance, currently ranks 15th in speed but enjoys reasonably priced internet access. How reasonably priced? If you’re a US citizen you likely have an average internet access speed of 4.8 mbps and you pay a little over $3 per mbps. If you’re in Sweden, however, you likely have an 18 mbps connection and you pay a scant 63 cents per mbps. The real envy of the internet speed Olympics by far is Japan, with a mighty 61 mbps at a mere 27 cents per mbps. Hit up the link below for the full infographic (or use this local mirror if you need to dodge a firewall), then sound off in the comments with how you compare on the international scale. Internet Speeds and Costs Around the World [via Daily Infographic]

    Read the article

  • Achieve Named Criteria with multiple tables in EJB Data control

    - by Deepak Siddappa
    In EJB, when you create a named criteria using a sparse XML, the named criteria wizard displays only the attributes belonging to that particular entity, so results can be filtered only on a single entity bean. Take a scenario where we need to create a named criteria based on multiple tables using EJB. In BC4J we can achieve this by creating a view object based on multiple tables, so in this article we will try to achieve a named criteria based on multiple tables using EJB. Implementation Steps: Create a Java EE Web Application with entities based on Departments and Employees, then create a session bean and a data control for the session bean. Create a Java bean named CustomBean and add the code below; here the Java bean takes three fields from each of the Departments and Employees tables.

    public class CustomBean {
        private BigDecimal departmentId;
        private String departmentName;
        private BigDecimal locationId;
        private BigDecimal employeeId;
        private String firstName;
        private String lastName;

        public CustomBean() {
            super();
        }

        public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; }
        public BigDecimal getDepartmentId() { return departmentId; }
        public void setDepartmentName(String departmentName) { this.departmentName = departmentName; }
        public String getDepartmentName() { return departmentName; }
        public void setLocationId(BigDecimal locationId) { this.locationId = locationId; }
        public BigDecimal getLocationId() { return locationId; }
        public void setEmployeeId(BigDecimal employeeId) { this.employeeId = employeeId; }
        public BigDecimal getEmployeeId() { return employeeId; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public String getFirstName() { return firstName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
        public String getLastName() { return lastName; }
    }

    Open the session EJB file, add the method below to the session bean, expose it in the local/remote interface, and generate a data control for it. Note: in the code below, "em" is an EntityManager.

    public List<CustomBean> getCustomBeanFindAll() {
        String queryString =
            "select d.department_id, d.department_name, d.location_id, " +
            "e.employee_id, e.first_name, e.last_name " +
            "from departments d, employees e " +
            "where e.department_id = d.department_id";
        Query genericSearchQuery = em.createNativeQuery(queryString, "CustomQuery");
        List resultList = genericSearchQuery.getResultList();
        Iterator resultListIterator = resultList.iterator();
        List<CustomBean> customList = new ArrayList();
        while (resultListIterator.hasNext()) {
            Object col[] = (Object[]) resultListIterator.next();
            CustomBean custom = new CustomBean();
            custom.setDepartmentId((BigDecimal) col[0]);
            custom.setDepartmentName((String) col[1]);
            custom.setLocationId((BigDecimal) col[2]);
            custom.setEmployeeId((BigDecimal) col[3]);
            custom.setFirstName((String) col[4]);
            custom.setLastName((String) col[5]);
            customList.add(custom);
        }
        return customList;
    }

    Open the DataControls.dcx file and create a sparse XML for CustomBean. In the sparse XML, navigate to the Named Criteria tab -> Bind Variables section and create two bind variables, deptId and fName. Then, under Named Criteria tab -> Named Criteria, create a named criteria and map the query attributes to the bind variables. In the ViewController project create a .jspx page and, from the Data Controls palette, drop customBeanFindAll -> Named Criteria -> CustomBeanCriteria -> Query as an ADF Query Panel with Table. Run the .jspx page and enter values in the search form with departmentId as 50 and firstName as "M". The named criteria filters the data source query and displays the result as shown below.

    Read the article

  • Deloitte 2013 Global Contact Center Survey

    - by Richard Lefebvre
    "77% of contact centers expect to maintain or grow in size in the next 12-24 months." This is one of the findings of Deloitte's 2013 Global Contact Center Survey, a report that points to plenty of great business opportunities for smart CX consultants and integrators using Oracle Service solutions.

    Read the article

  • Country specific content vs global content

    - by Ando
    I have a global product presentation website, myproduct.com. For certain countries I also own the country domains: myproduct.co.uk, myproduct.com.au, myproduct.es, myproduct.de, etc. The presentation website is translated into multiple languages and I set up redirects: myproduct.es redirects to myproduct.com/es/, myproduct.de redirects to myproduct.com/de/, etc. The content so far is the same, just translated into different languages. The advantage is that it's easy to keep the content aligned - everything is managed from one centralized dashboard (I'm using WordPress with qtranslate). Now I'm running into trouble, as for different countries I want localized content - for the UK I want to run different promotions and use a different reseller than for .com.au, so I would like users coming from myproduct.co.uk to see something different than those coming from myproduct.com.au (and not be redirected to myproduct.com as they are right now). How can I achieve this? I could duplicate the whole main website and modify only certain parts, but then I would have a lot of duplicate content (e.g. info about how the product works) and I would have pages that are likely to change (the FAQ page) that I would have to keep updated across all the websites. I could partially duplicate the main website: on the localized website I would have only the pages that are different, and all other links would point to the .com site. This would solve the duplication problem but would confuse the user, as you would navigate from .co.uk to .com without noticing and then wonder how to get back. Is there another, better option?

    Read the article

  • Oracle Secure Global Desktop (SGD) 5.1

    - by wcoekaer
    Last week, we released the latest update of Oracle Secure Global Desktop. Release 5.1 introduces a number of bug fixes and smaller changes, but the most interesting one is definitely increased support for HTML5-based client access. In SGD 5.0 we added support for Apple iPads using Safari to connect to SGD and display your session right inside the browser. The traditional model for SGD is that you connect to the webtop using a web browser, and applications are displayed locally using a local client (tta). This client gets installed the first time you connect. So in the traditional model (which works very well...) you need a web browser, Java, and the tta client. With the addition of HTML5 support, there's no longer a need to install a local client; in fact, there is also no longer a need to have Java installed. We currently support Chrome as the browser to enable HTML5 clients. This allows us to enable HTML5 on Android devices and also on desktops running Chrome (Windows, Mac OS X, Linux). Connections will work transparently across proxy servers as well. So now you can run any SGD published app or desktop right from your web browser, inside a browser window. This is very convenient and cool.

    Read the article

  • On Developing Web Services with Global State

    - by user74418
    I'm new to web programming. I'm more experienced and comfortable with client-side code. Recently, I've been dabbling in web programming through Python's Google App Engine. I ran into some difficulty while trying to write some simple apps for the purposes of learning, mainly involving how to maintain some kind of consistent, universally accessible state for the application. I tried to write a simple queueing management system, the kind you would expect to be used in a small clinic or at a cafeteria. Typically, this is done with hardware. You take a number from a ticketing machine, and when your number is displayed or called you approach the counter for service. Alternatively, you could be given a small pager, which will beep or vibrate when it is your turn to receive service. The former is somewhat better in that you have an idea of how many people are still ahead of you in the queue. In this situation, the global state is the last number in the queue, which needs to be updated whenever a request is made to the server. I'm not sure how best to store and maintain this value in a GAE context. The solution I thought of was to keep the value in the Datastore, attempt to query it during a ticket request, update the value, and then re-store it with put. My problem is that I haven't figured out how to lock the resource so that other requests do not check the value while it is in the middle of being updated. I am concerned that I may end up with ticket requests that have the same queue number. Also, the whole solution feels awkward to me. I was wondering if there was a more natural way to accomplish this without having to go through the Datastore. Can anyone with more experience in this domain provide some advice on how to approach the design of the above application?

    Read the article

  • System.Reflection - Global methods aren't available for reflection

    - by mrjoltcola
    I have an issue with a semantic gap between the CLR and System.Reflection. System.Reflection does not (AFAIK) support reflecting on global methods in an assembly. At the assembly level, I must start with the root types. My compiler can produce assemblies with global methods, and my standard bootstrap lib is a DLL that includes some global methods. My compiler uses System.Reflection to import assembly metadata at compile time. It seems that if I depend on System.Reflection, global methods are not a possibility. The cleanest solution is to convert all of my standard methods to class static methods, but the point is, my language allows global methods, and the CLR supports them, but System.Reflection leaves a gap. ildasm shows the global methods just fine, but I assume it does not use System.Reflection itself and goes right to the metadata and bytecode. Besides System.Reflection, is anyone aware of any other 3rd party reflection or disassembly libs that I could make use of (assuming I will eventually release my compiler as free, BSD-licensed open source)?

    Read the article

  • Cleaning up temp files in Mac OS X

    - by deddebme
    I was a Windows person for more than 10 years. Around 4 months ago I switched to Mac, and I have never looked back. But there is one thing that bothers me: my Mac partition volume is slowly and gradually losing space. I am pretty sure there are a lot of orphaned temporary files lying around in the volume. I know where to find the obsolete temp files in my Windows partition, but how about in Mac OS X?

    Read the article

  • CPU Temp Advice

    - by compu
    I have an AMD X2 64 7750BE. My system BIOS shows my CPU temp at 64 degrees Celsius (64C) - is that too hot? My CPU fan speed is 2355 RPM - is that normal? Please advise. Thanks in advance.

    Read the article

  • Moving Temp folder location

    - by Chadworthington
    My C drive is a small SSD that I don't want to wear out or fill with unnecessary stuff. Is there any harm in moving the temp folder to another drive? I am under the impression that Windows will not need to look for files here, for example, in the case of uninstalls... Is this the case?

    Read the article

  • Make a temp network.

    - by giodamelio
    I have an XAMPP test server running on my Windows Vista laptop. I also have a small wireless router. I would like to be able to create a small temp network and broadcast it, so that any other computers connected to the wifi can go to localhost/ or an internal IP address and view my server. The router will not be able to be connected to the internet, but I don't see how that would make a difference. Thanks giodamelio

    Read the article

  • IIS 6+ASP.NET - many temp files generated

    - by moshe_ravid
    I have an ASP.NET app plus some .NET web services running on IIS 6 (Windows Server 2003). The issue is that IIS is generating a lot (!) of files in the "c:\WINDOWS\Temp" directory. A lot of files means thousands of files, which add up to more than 3 GB so far. The files are generated by this command:

    C:\WINDOWS\SysWOW64\inetsrv "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe" /t:library /utf8output /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\vfagt\113819dd\db0d5802\assembly\dl3\fedc6ef1\006e24d8_3bc9ca01\VfAgentWService.DLL" /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Web.Services\2.0.0.0__b03f5f7f11d50a3a\System.Web.Services.dll" /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll" /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll" /R:"C:\WINDOWS\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e089\System.dll" /out:"C:\WINDOWS\TEMP\9i_i2bmg.dll" /debug- /optimize+ /nostdlib /D:_DYNAMIC_XMLSERIALIZER_COMPILATION "C:\WINDOWS\TEMP\9i_i2bmg.0.cs"

    The files in the temp directory are pairs of *.out & *.err files, where the *.err file is zero size and the *.out file contains the compilation output messages. What is causing IIS to generate so many files? How can I prevent it? UPDATE: The problem is that the command I described above (csc.exe) is being executed many (many) times, causing the .out & .err files to be generated so many times that they consume the disk space. So - my question is: what is causing this command to run so many times? (I don't have that many .aspx & .asmx files in my web app.) Thanks, Moe

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World In the previous post we discussed a “Hello World” example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads. Using OSCH External Tables for Access and Loading OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle. INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively: /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table. Determine Your DOP It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances. And suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or at a time when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.) Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end to end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
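
Putting the statements above together for a DOP of 8, a minimal end-to-end load sketch might look like the following. The table name, columns, and storage clauses are illustrative assumptions, not taken from this post; only the session settings and hints mirror the text above.

-- Hypothetical target table, created with the NOLOGGING and PARALLEL attributes described above
CREATE TABLE my_oracle_table (
    id      NUMBER,
    payload VARCHAR2(4000)
)
NOLOGGING
PARALLEL;

-- Assert the DOP for this session (assumes the DBA allows a DOP of 8)
ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

-- Direct-path, parallel insert from the OSCH external table
INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
SELECT * FROM my_hdfs_external_table;

COMMIT;
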
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool’s ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file.) As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end to end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading. Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story.
They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table of course, and then a "List of Lists" table that has the username and list name as its primary key, along with whatever other info we want to attach to the lists as a whole. Finally, for lack of a better word, the List table, which would again have the username/listname primary key, then the contact ID and the notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation. For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user and even for each list, but that seems like a bad idea for other reasons. My db explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?
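
    For reference, a minimal sketch of the three-table layout described above (MySQL/InnoDB syntax; the table and column names are illustrative, not from the question, and it uses a surrogate list ID rather than the username/listname compound key):

    CREATE TABLE users (
        user_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        username VARCHAR(64) NOT NULL UNIQUE
    ) ENGINE=InnoDB;

    -- The "List of Lists": one row per list a user owns
    CREATE TABLE lists (
        list_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        user_id   INT UNSIGNED NOT NULL,
        list_name VARCHAR(128) NOT NULL,
        UNIQUE KEY uk_user_list (user_id, list_name),
        FOREIGN KEY (user_id) REFERENCES users (user_id)
    ) ENGINE=InnoDB;

    -- One row per contact on a specific list, with the per-list notes
    CREATE TABLE list_contacts (
        list_id    INT UNSIGNED NOT NULL,
        contact_id INT UNSIGNED NOT NULL,
        notes      TEXT,
        PRIMARY KEY (list_id, contact_id),
        FOREIGN KEY (list_id) REFERENCES lists (list_id)
    ) ENGINE=InnoDB;

    A single list_contacts table keyed on (list_id, contact_id) generally scales better than per-user or per-list tables, and the surrogate list_id keeps that large table narrow whether or not foreign keys end up being available.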

    Read the article

  • Design Issues With Forms

    - by ultan o'broin
    Interesting article on UX Matters, well worth reading, especially for what global design research can offer for a better user experience in all languages: Label Placement in Austrian Forms, with Some Lessons for English Forms. What is perhaps underplayed here is the cultural influence of how people worked with forms in the past, and how a proper global user-centered design process needs to address this issue and move usability gains (in the enterprise space, productivity especially) in the right direction.

    Read the article

  • How do I reinstate my admin user privileges to global read/write

    - by Matt
    I am running Ubuntu 12.04 LTS. I only have the one user, which I created when I installed Ubuntu. Everything has been fine - love it - until I recently updated a software package from the command line using sudo (not gksudo). I was having a little bother which did not make sense to me, and in a fluff I changed my user read/write privileges through the GUI (not even clear how I got there!). After restart I was stuck in a login loop - using the right login password but kept getting looped back to the login and could only log in as Guest. I could still log in with my user/password via ctrl + alt + f1. Eventually I was able to log in again at start up. Not sure exactly what it was I changed that worked, but it was one of/or a combination of installing the latest security updates, changing the login manager from LightDM to GDM and back again, removing the ICE/Xauthority files and chowning them to my user. The current dilemma is that my primary admin user's privileges were read only. On the command line, ls -ls /home/user returned this value: drwx------ 48 username username 20480. I have since changed this using sudo chmod 0755 /home/username (from my limited understanding, 755 should return my user privileges to their original read/write glory). ls -ld /home/user currently shows my user privileges as: drwxr-xr-x 48 username username 20480. I still seem to have only read access permissions. I've been through lots of threads (and the help file) that talk about creating new users/groups, permissions, etc., but specific info on returning my existing global/admin/primary user's privileges to what they were when I first created that user is baffling me. I feel this is something really simple I'm just not getting. Please help!
    sudo mount
    /dev/sda1 on / type ext4 (rw,errors=remount-ro)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /proc type sysfs (rw,noexec,nosuid,nodev)
    none on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=07pe tmpfs55)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw, ,nosuid,nodev,size=5242880
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    gvfs-fuse-daemon on /home/meng/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=meng)
    none on /tmp/guest-1R2Fi5 type tmpfs (rw,mode=700)

    Read the article

  • Three Global Telecoms Soar With Siebel

    - by michael.seback
    Deutsche Telekom Group Selects Oracle's Siebel CRM to Underpin Next-Generation CRM Strategy The Deutsche Telekom Group (DTAG), one of the world's leading telecommunications companies, and a customer of Oracle since 2001, has invested in Oracle's Siebel CRM as the standard platform for its Next-Generation CRM strategy, a move to lower the cost of managing its 120 million customers across its European businesses. Oracle's Siebel CRM is planned to be deployed in Germany and all of the company's European businesses within five years. "...Our Next-Generation strategy is a significant move to lower our operating costs and enhance customer service for all our European customers. Not only is Oracle underpinning this strategy, but is also shaping the way our company operates and sells to customers. We look forward to working with Oracle over the coming years as the technology is extended across Europe," said Dr. Steffen Roehn, CIO Deutsche Telekom AG... "The telecommunications industry is currently undergoing some major changes. As a result, companies like Deutsche Telekom are needing to be more intelligent about the way they use technology, particularly when it comes to customer service. Deutsche Telekom is a great example of how organisations can use CRM to not just improve services, but also drive more commercial opportunities through the ability to offer highly tailored offers, while the customer is engaged online or on the phone," said Steve Fearon, vice president CRM, EMEA Read more. Telecom Argentina S.A. Accelerates Time-to-Market for New Communications Products and Services Telecom Argentina S.A. offers basic telephone, urban landline, and national and international long-distance services...."With Oracle's Siebel CRM and Oracle Communication Billing and Revenue Management, we started a technological transformation that allows us to satisfy our critical business needs, such as improving customer service and quickly launching new phone and internet products and services." - Saba Gooley, Chief Information Officer, Wire Line and Internet Services, Telecom Argentina S.A. Read more. Türk Telekom Develops Benefits-Driven CRM Roadmap Türk Telekom Group provides integrated telecommunication services, from public switched telephone network (PSTN) and global systems for mobile communications technology (GSM) to broadband internet...."Oracle Insight provided us with a structured deployment approach that makes sense for our business. It quantified the benefits of the CRM solution allowing us to engage with the relevant business owners; essential for a successful transformation program." - Paul Taylor, VP Commercial Transformation, Türk Telekom Read more.

    Read the article

  • why does mysql have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother, and I've had good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables. But mysql status shows 512 are open: mysqladmin status Uptime: 212854 Threads: 1 Questions: 43041 Slow queries: 7 Opens: 2605 Flush tables: 1 Open tables: 512 Queries per second avg: 0.202 I've read that MyISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be, or what I should be looking into? I've also been using mysqltuner and it's been helpful. But it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times. It hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction, narrow down Google searches, or just generally help me be less clueless? Thanks!
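
    As a hedged starting point (a diagnostic sketch, not a definitive answer), these statements show where the open-table and fragmentation numbers above come from; the table name at the end is a placeholder:

    -- How many tables are open now vs. opened over the server's lifetime, and the cache size
    SHOW GLOBAL STATUS LIKE 'Open%tables';      -- Open_tables, Opened_tables
    SHOW GLOBAL VARIABLES LIKE 'table%cache%';  -- table_cache / table_open_cache

    -- Free (reclaimable) space per table, which is what most "fragmented tables" counts look at
    SELECT table_schema, table_name, data_free
    FROM information_schema.tables
    WHERE data_free > 0
    ORDER BY data_free DESC;

    -- Rebuild one table; for InnoDB, OPTIMIZE is mapped to a table rebuild
    OPTIMIZE TABLE mydb.mytable;
    ALTER TABLE mydb.mytable ENGINE=InnoDB;   -- equivalent rebuild for an InnoDB table

    Note that for InnoDB tables stored in the shared ibdata file, data_free reports free space in the shared tablespace, so a "fragmented tables" count based on it may not drop even after a successful OPTIMIZE.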

    Read the article

  • Mysqld increases the load on the CPU and drops after flush-tables

    - by mirage
    Please advise on this issue. The normal load on the CPU is 20-30% us + sy. After restoring the database files from the slave server (same version), a periodic problem began: mysql starts to load the CPU at 100% (us + sy grow proportionally), the queue grows, and everything slows down. But after mysqladmin flush-tables things return to normal for a few hours. Dedicated Linux server running MySQL, 2 x E5506, 24GB RAM, database size 50GB.

    [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [-] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [-] Data in MyISAM tables: 33G (Tables: 1474)
    [-] Data in InnoDB tables: 1G (Tables: 4)
    [-] Data in MEMORY tables: 120K (Tables: 3)
    [-] Reads / Writes: 91% / 9%
    [-] Total buffers: 12.8M per thread and 7.1G global
    [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    4000 - 5500 rps

    key_buffer = 1536M
    max_allowed_packet = 2M
    table_cache = 4096
    sort_buffer_size = 409584
    read_buffer_size = 128K
    read_rnd_buffer_size = 8M
    myisam_sort_buffer_size = 64M
    thread_cache_size = 500
    query_cache_size = 100M
    thread_concurrency = 24
    max_connections = 700
    tmp_table_size = 4096M
    join_buffer_size = 4M
    max_heap_table_size = 4096M
    query_cache_limit = 1M
    low_priority_updates = 1
    concurrent_insert = 2
    wait_timeout = 30
    server-id = 1
    log_bin = /var/log/mysql/mysql-bin.log
    expire_logs_days = 10
    max_binlog_size = 100M
    innodb_buffer_pool_size = 1536M
    innodb_log_buffer_size = 4M
    innodb_flush_log_at_trx_commit = 2

    How to solve the problem?
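
    A hedged first check while the CPU is pegged (a diagnostic sketch, not a fix) is to see what the server is actually doing and whether the table cache or implicit temporary tables are churning between the flush-tables runs:

    -- What is running right now, and for how long
    SHOW FULL PROCESSLIST;

    -- Is the 4096-entry table cache being recycled constantly?
    SHOW GLOBAL STATUS LIKE 'Open%tables';    -- Open_tables, Opened_tables

    -- Are implicit temporary tables spilling to disk?
    SHOW GLOBAL STATUS LIKE 'Created_tmp%';   -- Created_tmp_tables, Created_tmp_disk_tables

    -- Key cache pressure for the 33G of MyISAM data
    SHOW GLOBAL STATUS LIKE 'Key_read%';      -- Key_reads vs. Key_read_requests

    If Opened_tables climbs sharply between checks, the table cache is being churned, which would fit the temporary relief from flush-tables; rapid growth in Created_tmp_disk_tables would instead point at queries building large on-disk temporaries.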

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file, and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done it before without issue; however, this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as the default for everything.) For example, with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try to describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables; it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because, again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
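
    One avenue sometimes used for orphaned file-per-table .ibd files is the discard/import tablespace sequence sketched below. Treat it as an assumption to test on a copy of the data directory rather than a guaranteed fix: the import only succeeds when the table definition matches exactly and the tablespace IDs line up with the new ibdata1, which is precisely what recreating ibdata1 can break. The column list shown is a placeholder.

    -- 1. Recreate the table with exactly the same definition it had before
    CREATE TABLE urls (
        id  INT NOT NULL PRIMARY KEY,   -- hypothetical columns; must match the original definition
        url TEXT
    ) ENGINE=InnoDB;

    -- 2. Throw away the empty urls.ibd that MySQL just created
    ALTER TABLE urls DISCARD TABLESPACE;

    -- 3. Copy the saved urls.ibd back into the database directory, then attach it
    ALTER TABLE urls IMPORT TABLESPACE;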

    Read the article

  • SQL SERVER – Identify Numbers of Non Clustered Index on Tables for Entire Database

    - by pinaldave
    Here is a script which will give you the number of non-clustered indexes on every table in the entire database.

    SELECT COUNT(i.TYPE) NoOfIndex,
           [schema_name] = s.name,
           table_name = o.name
    FROM sys.indexes i
    INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
    INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    WHERE o.TYPE IN ('U')
    AND i.TYPE = 2
    GROUP BY s.name, o.name
    ORDER BY schema_name, table_name

    Here is the small story behind why this script was needed. I recently went to meet my friend in his office and he introduced me to his colleague as someone who is an expert in SQL Server indexing. I politely said I am still learning about indexing and have a long way to go. My friend’s colleague right away said he had a suggestion for me related to indexes. According to him, he was looking for a script which would count all the non-clustered indexes on all the tables in the database, and he was not able to find one on SQLAuthority.com. I was a bit surprised, as I really do not remember all the details of what I have written so far. I quickly pulled up my phone and tried to look for the script on my custom search engine, and he was correct. I had never written a script which counts all the non-clustered indexes on the tables in the whole database. Excessive indexing is not recommended in general. If you have too many indexes it will definitely negatively affect your performance. The above query will quickly give you details of the number of indexes on the tables in your entire database. You can quickly glance and use the numbers as a reference. Please note that the number of indexes is not an indication of bad indexes. There is a lot of wisdom I can write here but that is not the scope of this blog post. There are many different rules with indexes and many different scenarios. For example - a table which is a heap (no clustered index) is often not recommended for an OLTP workload (here is the blog post to identify them), drop unused indexes with careful observation (here is the script for it), identify missing indexes and after careful testing add them (here is the script for it). Even though I have given a few links here it is just the tip of the iceberg. If you follow only the above four pieces of advice your ship may still sink. Those who want to learn the subject in depth can watch the videos here after logging in. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
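
    Since the linked scripts are not reproduced in this excerpt, here is a minimal sketch in the same style for the heap check mentioned above (tables with no clustered index); treat it as an illustration rather than the script the post links to:

    SELECT [schema_name] = s.name, table_name = o.name
    FROM sys.indexes i
    INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
    INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    WHERE o.TYPE IN ('U')
    AND i.TYPE = 0   -- 0 = heap, i.e. no clustered index
    ORDER BY schema_name, table_name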

    Read the article

  • I've Heard Global Variables Are Bad, What Alternative Solution Should I Use?

    - by Jay
    I've read all over the place that global variables are bad and that alternatives should be used. In JavaScript specifically, what solution should I choose? I'm thinking of a function that, when fed two arguments (function globalVariables(Variable,Value)), looks for Variable in a local array and, if it exists, sets its value to Value; otherwise Variable and Value are appended. If the function is called without arguments (function globalVariables()) it returns the array. Perhaps if the function is fired with just one argument (function globalVariables(Variable)) it returns the value of Variable in the array. What do you think? I'd like to hear your alternative solutions and arguments for using global variables.

    Read the article

  • access denied trying to extract an archive in the windows user temp folder

    - by Hanan
    I'm trying to run a command-line process (an extraction of a .7z archive) on a file that lies in a temporary folder in the Windows user temp directory (C:\Documents and Settings\User\Local Settings\Temp), using Process in my C# app. I think the process returns an error because of "access denied": I can see a Win32Exception with error code 5 when I dig into the Process object of .NET. Doing the same in some other location worked fine before, so I guess maybe it's something I'm not supposed to do (running a process that uses a file in %TEMP%)? Perhaps I need to pass security somehow?

    Read the article

  • convert xml.documentElement.getElementsByTagName("marker")[index].getAttribute(temp) to string in javascript

    - by user324884
    I am parsing an XML file in JavaScript and after that I want to concatenate all the data into a string, but it is failing and returning undefined.

    GDownloadUrl("./include/dataemp2.xml", function(data) {
        var xml = GXml.parse(data);
        markers = xml.documentElement.getElementsByTagName("marker");
        for (var t = 0; t < 18; t++) {
            var temp = markers[index].getAttribute("address");
            html = html + temp;
        }
    });

    It is returning undefined because temp is not concatenating into "html"; whereas when I do it like html = html + markers[index].getAttribute("address"); it gives me the expected output.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >