Search Results

Search found 10006 results on 401 pages for 'symbol tables'.

Page 6/401 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • EF Doesn't Like Same Named Tables

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2013/07/02/153327.aspx

    It's another week and another restriction imposed by the Entity Framework (EF). Don't get me wrong. I like EF, but I don't like how it restricts you in different ways. At this point you may be asking yourself the question: how can you have more than one table with the same name? The answer is to have tables in different schemas. I do this to partition the data based on the area of concern. It allows security to be assigned conveniently. A lot of people don't use schemas. I love them. But this article isn't about schemas.

    In this situation I have two tables: Contact.Person and Employee.Person. The first contains the basic, more public information such as the name. The second contains mostly HR-specific information. I then mapped these tables to two classes. I stuck to a Table per Class (TPC) mapping, because of problems I've had in the past implementing inheritance with EF. The following code gives you the basic contents of the classes.

        [Table("Person", Schema = "Employee")]
        public class Employee {
           ...
           public int PersonId { get; set; }
           [ForeignKey("PersonId")]
           public virtual Person Person { get; set; }
        }

        [Table("Person", Schema = "Contact")]
        public class Person {
           [Key]
           public int Id { get; set; }
           ...
        }

    This seemingly simple scenario just doesn't work. The problem occurs when you try to add a Person to the DbContext. You get an InvalidOperationException with the following text:

        The entity types 'Employee' and 'Person' cannot share table 'People' because they are not in the same type hierarchy or do not have a valid one to one foreign key relationship with matching primary keys between them.

    This is interesting for a couple of reasons. First, there is no People table in my database. Second, I have used the SetInitializer method to stop a database from being created, so it shouldn't be thinking about new tables.

    The solution to my problem was to change the name of my Employee.Person table. I decided to name it Employee.Employee. It's not ideal, but it gets me past the EF limitation. I hope that this article will help someone else that has the same problem.

    Read the article

  • Symbol lookup error while starting pidgin in Arch

    - by Hossein Mobasher
    I have just installed pidgin from the source code that I downloaded from the pidgin site. It compiled correctly using the commands below:

        ./configure --disable-gtkspell ; make ; make install

    But when I try to start pidgin from the terminal, an error occurs:

        pidgin: symbol lookup error: /usr/lib/libfarstream-0.1.so.0: undefined symbol: g_key_file_free

    How can I solve this problem? Thanks for your attention :)

    Read the article

  • Suddenly get "apt-get: symbol lookup error" when using apt-get

    - by marue
    I have no idea what has gone wrong here. I installed the audio tool sox, then tried to install the library libsox-fmt-all, and all of a sudden apt-get refused to work. I cannot use it now, neither to update nor to install anything. Could somebody suggest what I could do to get it back to work? Here is the complete message it throws:

        apt-get: symbol lookup error: /usr/lib/libstdc++.so.6: undefined symbol: _ZNSt7num_getIcSt19istreambuf_iteratorIcSt11char_traitsIcEEE2idE, version GLIBCXX_3.4

    Read the article

  • bluetooth fails to get enabled; Unknown symbol security_sk_clone (err 0) in dmesg

    - by Srivatsa Kanchi
    $ uname -a
    Linux ubuntu1110 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux

    Laptop model: Lenovo W520. Upon trying to enable bluetooth, the LED turns on, but bluetooth fails to get enabled in the preferences. I have also seen the error message below in dmesg:

        [78183.389048] usb 1-1.4: new full speed USB device number 19 using ehci_hcd
        [78183.504129] bluetooth: Unknown symbol security_sk_clone (err 0)
        [78183.505084] bluetooth: Unknown symbol security_sk_clone (err 0)
        [78183.505189] bluetooth: Unknown symbol security_sk_clone (err 0)
        [78183.505294] bluetooth: Unknown symbol security_sk_clone (err 0)

    Read the article

  • Oracle 11gR2 exp does not export some tables

    - by Tilo Prütz
    I have an Oracle 11g (11.2.0.1) database running on Linux (x64). Within the database I have a schema with 33 tables. When I log in via sqlplus I can list all the tables via

        SELECT OBJECT_NAME FROM USER_OBJECTS WHERE OBJECT_TYPE = 'TABLE';

    But when I export the tablespace using

        exp ... BUFFER=65536 FULL=N COMPRESS=N CONSISTENT=Y TABLESPACES=... FILE=...

    it only exports 24 of the 33 tables. I have tried to export the missing tables via

        exp ... TABLES=<missing_table> ...

    But then I get an error:

        EXP-00011: NPSMIGRO2_CM.DEFAULT_USR_ATTR_VALUES does not exist

    How can I find out what's wrong here? How can I export all the tables?
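    If the missing tables are empty, a likely culprit on 11.2.0.1 is deferred segment creation: the classic exp utility silently skips tables that have no segment yet. A minimal check and workaround sketch, assuming the owner is NPSMIGRO2_CM (taken from the error message) and that you can query DBA_TABLES (use USER_TABLES if connected as the schema owner):

        -- Do the missing tables simply have no segment yet (11gR2 deferred
        -- segment creation)? Classic exp skips segment-less tables.
        SELECT table_name, segment_created
          FROM dba_tables
         WHERE owner = 'NPSMIGRO2_CM'
           AND segment_created = 'NO';

        -- One workaround: force a segment to be allocated, then re-run exp.
        -- (Alternatively, use expdp, which handles these tables correctly.)
        ALTER TABLE NPSMIGRO2_CM.DEFAULT_USR_ATTR_VALUES ALLOCATE EXTENT;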

    Read the article

  • MySQL Privileges required to GRANT EVENT, EXECUTE, LOCK TABLES, and TRIGGER

    - by Brad
    I have an account, user_a, and I would like to grant all available permissions on some_db to user_b. I have tried the following query:

        GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `some_db`.* TO 'user_b'@'%' WITH GRANT OPTION

    The result:

        Access denied for user 'user_a'@'%' to database 'some_db'

    Some experimentation has shown me that the only permissions my account (user_a) is unable to grant are EVENT, EXECUTE, LOCK TABLES, and TRIGGER. What privileges are required for my account to GRANT these privileges to another user? If I run SHOW GRANTS, I get this output:

        "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER ON *.* TO 'user_a'@'%' IDENTIFIED BY PASSWORD '1234567890abcdef' WITH GRANT OPTION"
        "GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON `some_other_unrelated_db`.* TO 'user_a'@'%'"
        "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE ROUTINE, ALTER ROUTINE ON `another_unrelated_db`.* TO 'user_a'@'%' WITH GRANT OPTION"
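    The pattern in the SHOW GRANTS output is that an account can only pass on privileges it already holds WITH GRANT OPTION at the relevant scope, and user_a's grants are missing exactly EVENT, EXECUTE, LOCK TABLES, and TRIGGER on some_db. A hedged sketch of what an administrator (e.g. root) would run to close that gap; the account and database names come from the question, everything else is an assumption:

        -- Run as an account that already holds these privileges with GRANT OPTION
        -- (typically root). This lets user_a grant them on some_db in turn.
        GRANT EVENT, EXECUTE, LOCK TABLES, TRIGGER
          ON `some_db`.*
          TO 'user_a'@'%'
          WITH GRANT OPTION;

        FLUSH PRIVILEGES;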

    Read the article

  • Existing tables with binaries to use filestream

    - by user1098487
    I've got a few tables for which I want to use filestream storage. These tables already contain binary data and have rowguids. However, at the time they were created, the tables were not added to a filestream-enabled filegroup. What is the best way to have these tables use filestream at this point? Do I need to drop and recreate the tables and migrate the data? Is there an easier way? The database already has filestream enabled and there are other tables which are using it.
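    An existing varbinary(max) column cannot be converted to FILESTREAM in place; a common approach is to add a new FILESTREAM column, copy the data across, then drop and rename. A minimal sketch under those assumptions, with hypothetical names (dbo.Documents, Content, fsFileGroup), relying on the table already having a unique ROWGUIDCOL and the database already having a FILESTREAM filegroup:

        -- Associate the table with a FILESTREAM filegroup, if not already done.
        ALTER TABLE dbo.Documents SET (FILESTREAM_ON = fsFileGroup);

        -- Add a FILESTREAM column alongside the existing varbinary(max) column.
        ALTER TABLE dbo.Documents
            ADD Content_FS VARBINARY(MAX) FILESTREAM NULL;

        -- Copy the existing binary data into the FILESTREAM column.
        UPDATE dbo.Documents
           SET Content_FS = Content;

        -- Drop the old column and rename the new one into its place.
        ALTER TABLE dbo.Documents DROP COLUMN Content;
        EXEC sp_rename 'dbo.Documents.Content_FS', 'Content', 'COLUMN';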

    Read the article

  • Achieve Named Criteria with multiple tables in EJB Data control

    - by Deepak Siddappa
    In EJB, when you create a named criteria using a sparse xml, the named criteria wizard only displays attributes belonging to that particular entity, so results can be filtered only on a single entity bean. Take a scenario where we need to create a named criteria based on multiple tables using EJB. In BC4J we can achieve this by creating a view object based on multiple tables. So in this article, we will try to achieve a named criteria based on multiple tables using EJB.

    Implementation Steps

    Create a Java EE Web Application with entities based on Departments and Employees, then create a session bean and a data control for the session bean.

    Create a Java Bean named CustomBean and add the below code to the file. Here the java bean takes three fields from each of the Departments and Employees tables.

        public class CustomBean {
            private BigDecimal departmentId;
            private String departmentName;
            private BigDecimal locationId;
            private BigDecimal employeeId;
            private String firstName;
            private String lastName;

            public CustomBean() {
                super();
            }

            public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; }
            public BigDecimal getDepartmentId() { return departmentId; }
            public void setDepartmentName(String departmentName) { this.departmentName = departmentName; }
            public String getDepartmentName() { return departmentName; }
            public void setLocationId(BigDecimal locationId) { this.locationId = locationId; }
            public BigDecimal getLocationId() { return locationId; }
            public void setEmployeeId(BigDecimal employeeId) { this.employeeId = employeeId; }
            public BigDecimal getEmployeeId() { return employeeId; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            public String getFirstName() { return firstName; }
            public void setLastName(String lastName) { this.lastName = lastName; }
            public String getLastName() { return lastName; }
        }

    Open the session EJB file and add the below code to the session bean, expose the method in the local/remote interface, and generate a data control for it. Note: in the code below, "em" is an EntityManager.

        public List<CustomBean> getCustomBeanFindAll() {
            String queryString =
                "select d.department_id, d.department_name, d.location_id, e.employee_id, e.first_name, e.last_name from departments d, employees e\n" +
                "where e.department_id = d.department_id";
            Query genericSearchQuery = em.createNativeQuery(queryString, "CustomQuery");
            List resultList = genericSearchQuery.getResultList();
            Iterator resultListIterator = resultList.iterator();
            List<CustomBean> customList = new ArrayList();
            while (resultListIterator.hasNext()) {
                Object col[] = (Object[])resultListIterator.next();
                CustomBean custom = new CustomBean();
                custom.setDepartmentId((BigDecimal)col[0]);
                custom.setDepartmentName((String)col[1]);
                custom.setLocationId((BigDecimal)col[2]);
                custom.setEmployeeId((BigDecimal)col[3]);
                custom.setFirstName((String)col[4]);
                custom.setLastName((String)col[5]);
                customList.add(custom);
            }
            return customList;
        }

    Open the DataControls.dcx file and create a sparse xml for CustomBean. In the sparse xml navigate to the Named Criteria tab -> Bind Variable section and create two bind variables, deptId and fName.

    In the sparse xml navigate to the Named Criteria tab -> Named Criteria, create a named criteria, and map the query attributes to the bind variables.

    In the ViewController create a .jspx page; from the data control palette drop customBeanFindAll -> Named Criteria -> CustomBeanCriteria -> Query as an ADF Query Panel with Table.

    Run the .jspx page and enter values in the search form, with departmentId as 50 and firstName as "M". The named criteria will filter the query of the data source and display the filtered results.

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints.

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen.

    The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour.

    The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for a DOP of 8, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few.

    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
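    Pulling together the statements scattered through the post, a load session for the running example (DOP of 8) would look roughly like the sketch below. my_oracle_table and my_hdfs_external_table are the post's placeholder names; the column list is a hypothetical stand-in and must match the external table definition.

        -- Target table defined with NOLOGGING and PARALLEL, as recommended above.
        CREATE TABLE my_oracle_table (
          id      NUMBER,
          payload VARCHAR2(4000)
        ) NOLOGGING PARALLEL;

        -- Assert the DOP for this session, then do a direct-path parallel load.
        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
        SELECT * FROM my_hdfs_external_table;

        COMMIT;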

    Read the article

  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table of course, and then a "List of Lists" table that has the username and listname as primary key, along with whatever other info we want to attach to the lists as a whole. Finally, for lack of a better word, the List table, which would again have the username/listname p.k., then the contact ID and the notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation. For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user and even for each list, but that seems like a bad idea for other reasons. My db explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?
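    A minimal sketch of the three-table layout described above, using surrogate list IDs rather than the compound username/listname key (the option the poster is leaning toward); all table and column names are hypothetical, and the foreign keys assume InnoDB is available:

        CREATE TABLE users (
          user_id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
          username  VARCHAR(64) NOT NULL UNIQUE
        ) ENGINE=InnoDB;

        -- The "List of Lists": one row per contact list a user owns.
        CREATE TABLE lists (
          list_id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
          user_id   INT UNSIGNED NOT NULL,
          list_name VARCHAR(64) NOT NULL,
          UNIQUE KEY uq_user_list (user_id, list_name),
          FOREIGN KEY (user_id) REFERENCES users (user_id)
        ) ENGINE=InnoDB;

        -- One row per contact on a specific list, with per-list notes.
        -- contact_id would reference a contacts table, omitted here.
        CREATE TABLE list_contacts (
          list_id    INT UNSIGNED NOT NULL,
          contact_id INT UNSIGNED NOT NULL,
          notes      TEXT,
          PRIMARY KEY (list_id, contact_id),
          FOREIGN KEY (list_id) REFERENCES lists (list_id)
        ) ENGINE=InnoDB;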

    Read the article

  • why does mysql have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother and had good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables. But mysql status shows 512 are open:

        mysqladmin status
        Uptime: 212854 Threads: 1 Questions: 43041 Slow queries: 7 Opens: 2605 Flush tables: 1 Open tables: 512 Queries per second avg: 0.202

    I've read that MyISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be or what I should be looking into?

    I've also been using mysqltuner and it's been helpful. But it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times. It hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction or narrow down google searches or just generally help me be less clueless? Thanks!
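    A hedged set of checks for the two symptoms above: the Open_tables counter tends to grow up to the table cache size (so 512 may just mean the cache is full rather than that something is wrong), and mysqltuner counts a table as fragmented when information_schema reports free space in its data file. Assuming access to information_schema:

        -- How big is the table cache, and how often are tables being (re)opened?
        SHOW GLOBAL VARIABLES LIKE 'table%cache';
        SHOW GLOBAL STATUS LIKE 'Open%';   -- Open_tables / Opened_tables

        -- Which tables does mysqltuner consider fragmented (non-zero free space)?
        SELECT table_schema, table_name, engine,
               ROUND(data_free / 1024 / 1024, 1) AS free_mb
          FROM information_schema.tables
         WHERE data_free > 0
         ORDER BY data_free DESC;

    Note that for InnoDB tables stored in the shared ibdata1 file, data_free reports free space in the shared tablespace, so OPTIMIZE TABLE will not make that number (or mysqltuner's count) go down.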

    Read the article

  • Mysqld increases the load on the CPU and drops after flush-tables

    - by mirage
    Please advise on this issue. The normal load on the cpu is 20-30% us + sy. After restoring the database files from the slave server (same version), a periodic problem began: mysql starts to load the cpu at 100% (us + sy grows proportionally), the queue grows, and everything slows down. But after mysqladmin flush-tables things are normalized for a few hours. Dedicated linux server running mysql, 2 x E5506, 24Gb RAM, database size 50Gb.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [-] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [-] Data in MyISAM tables: 33G (Tables: 1474)
        [-] Data in InnoDB tables: 1G (Tables: 4)
        [-] Data in MEMORY tables: 120K (Tables: 3)
        [-] Reads / Writes: 91% / 9%
        [-] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    4000 - 5500 rps

        key_buffer = 1536M
        max_allowed_packet = 2M
        table_cache = 4096
        sort_buffer_size = 409584
        read_buffer_size = 128K
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 500
        query_cache_size = 100M
        thread_concurrency = 24
        max_connections = 700
        tmp_table_size = 4096M
        join_buffer_size = 4M
        max_heap_table_size = 4096M
        query_cache_limit = 1M
        low_priority_updates = 1
        concurrent_insert = 2
        wait_timeout = 30
        server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size = 4M
        innodb_flush_log_at_trx_commit = 2

    How to solve the problem?
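    A few hedged checks that narrow down what flush-tables is actually resetting. FLUSH TABLES closes all open tables and also empties the query cache, so with a 100M query cache at 4000-5500 rps, query-cache pruning and fragmentation is one common suspect; these are only diagnostics, not a diagnosis:

        -- Is the query cache thrashing? A high Qcache_lowmem_prunes rate relative
        -- to Qcache_inserts suggests the 100M cache is constantly being reorganized.
        SHOW GLOBAL STATUS LIKE 'Qcache%';

        -- Are temporary tables spilling to disk despite the huge tmp_table_size?
        -- (Queries involving BLOB/TEXT columns always use on-disk temp tables.)
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';

        -- What are the threads actually doing when the CPU hits 100%?
        SHOW FULL PROCESSLIST;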

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done it before without issue, however this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as default for everything.) For example with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try to describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables, it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard-drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex-editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
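    Recreating ibdata1 wipes InnoDB's internal data dictionary, so the server no longer knows about the tables even though the .ibd files still hold the data. A commonly used (and version-sensitive) recovery sketch for one table, assuming file-per-table and that the exact original CREATE TABLE statement can be reconstructed; the urls table and its column list below are purely illustrative:

        -- 1. Recreate the table with the exact definition it had before (the column
        --    list here is a hypothetical placeholder -- it must match the original).
        CREATE TABLE urls (
          id  INT UNSIGNED NOT NULL PRIMARY KEY,
          url VARCHAR(2048) NOT NULL
        ) ENGINE=InnoDB;

        -- 2. Detach the empty urls.ibd that the CREATE just produced.
        ALTER TABLE urls DISCARD TABLESPACE;

        -- 3. Outside of MySQL, copy the old urls.ibd (the one that still contains
        --    the data) into the database directory in place of the discarded file.

        -- 4. Attach the old data file to the recreated table.
        ALTER TABLE urls IMPORT TABLESPACE;

    Note that on some server versions IMPORT TABLESPACE can refuse the file with a tablespace id mismatch, in which case dedicated InnoDB recovery tooling is the fallback.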

    Read the article

  • SQL SERVER – Identify Numbers of Non Clustered Index on Tables for Entire Database

    - by pinaldave
    Here is the script which will give you the number of non clustered indexes on any table in the entire database.

        SELECT COUNT(i.TYPE) NoOfIndex,
               [schema_name] = s.name,
               table_name = o.name
        FROM sys.indexes i
        INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
        INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        WHERE o.TYPE IN ('U')
        AND i.TYPE = 2
        GROUP BY s.name, o.name
        ORDER BY schema_name, table_name

    Here is the small story behind why this script was needed. I recently went to meet my friend in his office and he introduced me to his colleague as someone who is an expert in SQL Server indexing. I politely said I am still learning about indexing and have a long way to go. My friend's colleague right away said he had a suggestion for me related to indexes. According to him, he was looking for a script which would count all the non clustered indexes on all the tables in the database, and he was not able to find it on SQLAuthority.com. I was a bit surprised, as I really do not remember all the details about what I have written so far. I quickly pulled up my phone and tried to look for the script on my custom search engine, and he was correct: I never wrote a script which counts all the non clustered indexes on the tables in the whole database.

    Excessive indexing is not recommended in general. If you have too many indexes it will definitely negatively affect your performance. The above query will quickly give you details of the number of indexes on the tables in your entire database. You can quickly glance at the numbers and use them as a reference. Please note that the number of indexes is not an indication of bad indexes. There is a lot of wisdom I can write here, but that is not the scope of this blog post. There are many different rules with indexes and many different scenarios. For example - a table which is a heap (no clustered index) is often not recommended on an OLTP workload (here is the blog post to identify them), drop unused indexes with careful observation (here is the script for it), identify missing indexes and after careful testing add them (here is the script for it). Even though I have given a few links here, it is just the tip of the iceberg. If you follow only the above four pieces of advice your ship may still sink. Those who want to learn the subject in depth can watch the videos here after logging in.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Linking problems using libcurl with Visual C++ 2005: "unresolved external symbol __imp__curl_easy_se

    - by user88595
    Hi, I am planning to use libcurl in my project. I downloaded the library source, built it, and integrated it in a small POC application. I am able to build and run the application without any issues with the generated libcurl.dll and libcurl_imp.lib files. Now when I integrate the same library in my project I am getting linker errors:

        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_setopt
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_perform
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_cleanup
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_global_init
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_init

    I have researched and tried all manner of workarounds, like adding CURL_STATICLIB definitions, additional libraries, changing to /MT, even copying the libs to the release directory, but nothing seems to work. As far as I can see, the only difference between approach #1 and #2 in my steps is that #1 is a console application using libcurl.dll, while in my main project it is another dll which is trying to link to libcurl.dll. Would that necessitate any change in approach? Can I use the same generated multi-threaded DLL /MD file for both (tried /MT also with no success)? Any other ideas? Following are the linker options.

    -------------------------------------------------Working-------------------------------------------------
        /OUT:"C:\SampleFTP\Release\SampleFTP.exe" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\SampleFTP\SampleFTP\Release" /MANIFEST /MANIFESTFILE:"Release\SampleFTP.exe.intermediate.manifest" /DEBUG /PDB:"c:\SampleFTP\release\SampleFTP.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /LTCG /MACHINE:X86 /ERRORREPORT:PROMPT libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib
    -------------------------------------------------Working-------------------------------------------------

    ----------------------------------------------NotWorking-------------------------------------------------
        /OUT:".......\nt\Win32\Release/foo__tests.dll" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\FullLibPath\libcurl_libs" /LIBPATH:"......\nt\Win32\Release" /DLL /MANIFEST /MANIFESTFILE:".\foo_tests\Win32\Release\foo_tests.dll.intermediate.manifest" /DEBUG /PDB:".......\nt\Win32\Release/foo_tests.pdb" /OPT:REF /OPT:ICF /LTCG /IMPLIB:".......\nt\Win32\Release/foo_tests.lib" /MACHINE:X86 /ERRORREPORT:PROMPT odbc32.lib odbccp32.lib util_process.lib wsock32.lib Version.lib libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib "......\nt\win32\release\otherlib1.lib" "......\nt\win32\release\otherlib2.lib"
    ----------------------------------------------NotWorking-------------------------------------------------

    Read the article

  • Flex 3: Embedding MovieClip Symbol to Image Control programmatically

    - by BlueDude
    I've reviewed all the documentation and Google results surrounding this and I think I have everything set up correctly. My problem is that the symbol is not appearing in my app. I have a MovieClip symbol that I've embedded in my Flex component. I need to create a new Image control for each item from my dataProvider and assign this embedded symbol as the Image's source. I thought it was simple but apparently not. Here's a stub of the code:

        [Embed(source="../assets/assetLib.swf", symbol="StarMC")]
        private var StarClass:Class;

        protected function rebuildChildren():void {
            iterator.seek( CursorBookmark.FIRST );
            while ( !iterator.afterLast ) {
                child = new Image();
                var asset:MovieClipAsset = new StarClass() as MovieClipAsset;
                (child as Image).source = asset;
            }
        }

    I know the child is being created because I can draw a shape and that appears. Am I doing something wrong? Thank you!

    Read the article

  • The case against INFORMATION_SCHEMA views

    - by AaronBertrand
    In SQL Server 2000, INFORMATION_SCHEMA was the way I derived all of my metadata information - table names, procedure names, column names and data types, relationships... the list goes on and on. I used the system tables like sysindexes from time to time, but I tried to stay away from them when I could. In SQL Server 2005, this all changed with the introduction of catalog views. For one thing, they're a lot easier to type. sys.tables vs. INFORMATION_SCHEMA.TABLES? Come on; no contest there - even...(read more)
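    For readers who want to see the difference being described, here is a small hedged comparison: both queries list user tables, but the catalog view exposes richer metadata (and, as the post notes, is less typing):

        -- INFORMATION_SCHEMA: portable, but limited to the standardized columns.
        SELECT TABLE_SCHEMA, TABLE_NAME
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_TYPE = 'BASE TABLE';

        -- Catalog views: SQL Server specific, but expose object_id, create/modify
        -- dates, and join naturally to the rest of the sys.* catalog.
        SELECT s.name AS schema_name, t.name AS table_name, t.object_id, t.create_date
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id;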

    Read the article

  • Microsoft Access 2010: How to Modify Tables

    As you work with Microsoft Access 2010, it is highly likely that you will run into times where you need to modify the fields contained within your tables. Luckily, this is a task that is not hard to accomplish, and this tutorial will teach you how to do so. Before you begin modifying tables, you should be aware that there are basically three different ways in which you can affect or control the type of data that enters your fields, which are data types, character limits, and validation rules. We will be taking a look at them today, so let's begin, shall we? Keep in mind that for this tutor...

    Read the article

  • Export symbol as png

    - by Etiennebr
    I'd like to export plotting symbols from R as a png graphic, but I haven't found a perfect way yet. Using

        png("symbol.png", width=20, height=20, bg="transparent")
        par(mar=c(0,0,0,0))
        plot.new()
        symbols(1, 1, circles=0.3, bg=2, inches=FALSE, lwd=2, bty="n")
        dev.off()

    creates a little border around the symbol (I'd like it to be transparent) and the symbol isn't filling the whole space. Is there a more specific way of doing this?

    Read the article

  • JNI issue: symbol lookup error by FileHandle in C++ DLL

    - by MohamedMansour
    I made JNI functions and linked them with the C++ dynamic library successfully. I got all of them working just fine, except for one function: I get a symbol lookup error from the FileHandle class that I use in the C++ code to read data from a file. It works in a normal C++ project, but not in the DLL.

        /usr/lib/jvm/jdk1.7.0/bin/java: symbol lookup error: /home/.../NetBeansProjects/TRIOGUI/dist/libNativeAdd.so: undefined symbol: _ZN5Gdsii9GdsParserC1EPKcN7SoftJin10FileHandle8FileTypeEN5boost8functionIFvS2_ESaIvEEE
        Java Result: 127

    Can anybody help me please? :)

    Read the article

  • Microsoft Access 2010: How to Add, Edit, and Delete Data in Tables

    Tables are such an integral part of databases and corresponding tasks in Access 2010 because they act as the centers that hold all the data. They may be basic in format, but their role is undeniably important. So, to get you up to speed on working with tables, let's begin adding, editing, and deleting data. These are very standard tasks that you will need to employ from time to time, so it is a good idea to start learning how to execute them now. As is sometimes the case with our tutorials, we will be working with a specific sample. To learn the tasks, read over the tutorial and then apply...

    Read the article

  • Fusion Tables API: a focus on developers

    Fusion Tables API: a focus on developers. In this program we present an overview of the technology news from the developer relations team for the southern Latin America region. We continue presenting our approach to development, engineering, and best practices for implementing Google technology to support the evolution of technology solutions. Then we move into a technical scenario where we analyze the Fusion Tables API solution for developers (continuing our work on cloud persistence environments). Finally, we talk with the developer community, solve a technical challenge, and reward the region's talent. From: GoogleDevelopers

    Read the article

  • Introduction to SQL Server 2014 CTP1 Memory-Optimized Tables

    There are a number of new features that became available with SQL Server 2014. One of the more exciting features is the new Memory-Optimized tables. In this article Greg Larson explores how to create Memory-Optimized tables, and what he's found during his initial exploration of using this new type of table.
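    A hedged sketch of what creating such a table looks like in SQL Server 2014, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup; the table, columns, and bucket count are illustrative only:

        -- Memory-optimized tables require at least one index (here a nonclustered
        -- hash index) and a durability setting; SCHEMA_AND_DATA keeps the rows
        -- across restarts, SCHEMA_ONLY keeps only the table definition.
        CREATE TABLE dbo.SessionState
        (
            SessionId   INT          NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserName    NVARCHAR(64) NOT NULL,
            LastTouched DATETIME2    NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);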

    Read the article

  • How to find classes that use certain DB tables

    - by Songo
    Problem: I'm asked to prepare a document where all our DB tables are listed, and I'm supposed to list all Controllers that use these DB tables for reads and another list of Controllers that do write operations. Ex:

        +------------------------------------------+------------+
        | DB table                                 | tbl_Orders |
        +------------------------------------------+------------+
        | Controllers that perform read operations |     ??     |
        +------------------------------------------+------------+
        | Controllers that perform write operations|     ??     |
        +------------------------------------------+------------+

    We are trying to write some documentation for a legacy system built using Zend Framework. The code is scattered everywhere. There is code in the Controllers, in the models, and even in the views. The application uses PROPEL as an ORM. What makes this really difficult is that a Controller may not be directly calling the table; it may be instantiating a model class that calls that table. Is there an educated way to approach this crazy task?

    Note: Searching for the table name won't provide a solution, because if a model uses that table I wouldn't know which Controller is using that model.

    Read the article
