Search Results

Search found 22756 results on 911 pages for 'power query'.


  • Assigning an MVC Controller property from an ASP.NET page

    - by JasonMHirst
    I don't know if I'm understanding MVC correctly, so forgive me if my question makes no sense, but I'm trying to understand the following: I have some code on a controller that returns JSON data. The JSON data is populated based on a choice from a dropdown box on an ASP.NET page. I thought (incorrectly) that Session variables would be shared between the ASP.NET project and the MVC project. What I'd like to do therefore (if this is possible) is to call a Sub on the MVC side that sets a variable before the JSON query is run. I have the following: Sub SetCountryID(ByVal CountryID As Integer) Me.pCountrySelectedID = CountryID End Sub Which I can call with the following: Response.Write("http://localhost:7970/Home/SetCountryID/?CountryID=44") But this then results in a blank page - again, obviously totally incorrect! Am I going about MVC the wrong way, or do I still have a hell of a lot more learning to do? Is this even possible to do?

    Read the article

  • Local Data Cache - How do I refresh the local db when I add fields to remote db?

    - by Chu
    I'm using a Local Data Cache in an ASP.NET 3.5 environment. I made a change in my main database by adding a new field. I double-click my .SYNC file in my project to start the Local Data Cache wizard again. The wizard starts and I click OK, in the hope that it'll re-query my database and add the new field to the local database file. Instead, I get an error saying "Synchronizing the database failed with the message: Unable to enumerate changes at the DbServerSyncProvider..." The only way I know to get things working again is to delete the .SYNC file along with the local database and start from scratch. There's got to be an easier way... anyone know it?

    Read the article

  • CouchDB Map/Reduce raises exception in reduce function?

    - by fuzzy lollipop
    My view generates keys in this format: ["job_id:1234567890", 1271430291000], where the first key element is a unique key and the second is a timestamp in milliseconds. I run my view with elapsed_time?startkey=["123"]&endkey=["123",{}]&group=true&group_level=1 and here is my reduce function; the intention is to reduce the output to the earliest and latest timestamps and return the difference between each of them and now: function(keys,values,rereduce) { var now = new Date().valueOf(); var first = Number.MIN_VALUE; var last = Number.MAX_VALUE; if (rereduce) { first = Math.max(first,values[0].first); last = Math.min(last,values[0].last); } else { first = keys[0][0][1]; last = keys[keys.length-1][0][1]; } return {first:now - first, last:now - last}; } When processing a query it constantly raises the following exception: function raised exception (new TypeError("keys has no properties", "", 1)) I am making sure not to reference keys inside my rereduce block. Why does this function constantly raise this exception?
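
    For reference, the computation this reduce function is aiming for - the earliest and latest timestamps among the mapped rows, each expressed as an offset from now - can be restated as a plain function over the emitted (key, value) pairs. The sketch below is Python rather than CouchDB JavaScript and only illustrates the intended result; a real CouchDB reduce must also handle the rereduce case, where it receives earlier reduce outputs instead of raw keys.

        import time

        def elapsed_bounds(rows):
            # rows: the (key, value) pairs emitted by the map step, where each
            # key looks like ["job_id:1234567890", 1271430291000]
            now_ms = int(time.time() * 1000)
            timestamps = [key[1] for key, _value in rows]
            first, last = min(timestamps), max(timestamps)
            return {"first": now_ms - first, "last": now_ms - last}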

    Read the article

  • SoundCloud API, PHP, and OAuth

    - by fedeisas
    Hi guys. I'm building a site, and I need to query my last two tracks from my SoundCloud account and display them on my page. I've read the SoundCloud API documentation, but it seems obscure and beyond my reach. I've installed the PHP library for using the API and OAuth, and set up my SoundCloud application to get my Consumer Keys, but I can't start the OAuth session. I'm using this library. I was wondering if maybe you could hand me some sample code for doing this. That would be great!!

    Read the article

  • Using Google Weather API with Lat and Lon - how to format?

    - by Paul
    I want to use the Google Weather API by passing lat and long values. However, it seems Google needs these formatted differently from the values I have stored, i.e. for the town of McTavish I have values of 45.5 and -73.583. This works here: http://api.wunderground.com/auto/wui/geo/WXCurrentObXML/index.xml?query=45.5,-73.583 But when I use Google it does not. See: www.google.com/ig/api?weather=,,,45.5,-73.583 Any help appreciated. I would prefer to use the Google data.

    Read the article

  • Entity Framework: How to specify parameter type in generated SQL (SQL Server 2005), Nvarchar vs Varchar

    - by Gratzy
    In Entity Framework I have an entity 'Client' that was generated from a database. There is a property called 'Account'; it is defined in the storage model as: <Property Name="Account" Type="char" Nullable="false" MaxLength="6" /> And in the conceptual model as: <Property Name="Account" Type="String" Nullable="false" /> When select statements are generated using a variable for Account, i.e. where m.Account == myAccount..., Entity Framework generates a parameterized query with a parameter of type NVarchar(6). The problem is that the column in the table has a data type of char(6). When this is executed there is a large performance hit because of the data type difference. Account is an index on the table, and instead of using the index I believe an index scan is done. Anyone know how to force EF not to use Unicode for the parameter and to use Varchar(6) instead?

    Read the article

  • more ruby way of gsub from array

    - by aharon
    My goal is to let there be x so that x("? world. what ? you say...", ['hello', 'do']) returns "hello world. what do you say...". I have something that works, but it seems far from the "Ruby way": def x(str, arr, rep='?') i = 0 str.gsub(rep) { i+=1; arr[i-1] } end Is there a more idiomatic way of doing this? (Let me note that speed is the most important factor, of course.)
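
    For comparison only - this is not the Ruby idiom being asked for - the same placeholder-filling idea can be expressed with a substitution callback that pulls successive values from the array. A minimal sketch in Python:

        import re

        def fill(template, values, placeholder="?"):
            # Replace each occurrence of the placeholder with the next value in order.
            it = iter(values)
            return re.sub(re.escape(placeholder), lambda _match: next(it), template)

        print(fill("? world. what ? you say...", ["hello", "do"]))
        # -> hello world. what do you say...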

    Read the article

  • SQL Server Import table keeping default values

    - by Chrissi
    I am importing a table from one database to another in SQL Server 2008 by right-clicking the target database and choosing Tasks > Import Data... When I import the table I get the column names and types and all the data fine, but I lose the primary key, the identity specifications, and all the default values that were set in the source table. So now I have to set all the default values for each column again manually. Is there any way to get the default values with the import, or even afterwards with a query? I am VERY new to this and flailing in the dark, so forgive me if this is a really stupid question...

    Read the article

  • Why would a TableAdapter populate a DataSet with "1/1/2000" for an entire timestamp column?

    - by Rob
    I have a TableAdapter filling a DataSet, and for some reason every select query populates my timestamp column with the value 1/1/2000 for every selected row. I first verified that original values are intact on the DB side; for the most part, they are, although it seems a few rows lost their original timestamp because of update queries performed programmatically before the issue was discovered. The DataColumn type is DateType, while the database (Postgres) column type is timestamp. Up until recently, this was all playing very nicely. I noticed the issue in a bound DataGridView control, and verified that this is not related to data binding by utilizing the 'Preview Data' option in the VS DataSet Editor. Usually when I notice unexpected values popping up in my application it's related to a mis-configured property, type conflict, or another silly mistake I've made. So after checking properties and types, and even recreating the TableAdapter from scratch, to say I'm a little baffled is an understatement. Does anyone have any ideas of what I could do to fix the issue and/or diagnose the cause?

    Read the article

  • BYOD - The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants

    Less than three years ago, Apple introduced a new concept to the world: the tablet. It's hard to believe that in only 32 months the iPad has introduced an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasing “on the go” lifestyle, and their popularity isn't expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016. Tablets have been utilized for every function imaginable in today's world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It's no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away. Tablets have become a critical part of a user's personal life; but why stop there?

    Businesses today are taking major strides in implementing these devices, with the hope of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets. Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change: 79% of companies are making new investments in mobility this year, and Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014. It's not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data into their computer, or worse yet into a notebook, especially when the data has to be transcribed later into an online system. The response: the Bring Your Own Device (BYOD) phenomenon. According to Gartner, BYOD is "an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data." Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to.

    Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap. This is much more enticing for sales reps than spending time logging interactions on their (what seem to be outdated) computers. Forrester and IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, Information Week released a list of “9 Powerful Business Uses for Tablet Computers,” ranging from “enhancing the customer experience” to “improving data accuracy” to “eco-friendly motivations”. Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

    Three things businesses need to do to embrace BYOD:

    • Make customer-facing websites tablet-friendly for consistent user experiences
    • Develop tablet applications to continue to enhance the customer experience
    • Embrace and use the technology that comes with tablets

    Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices, coupled with the growing number of business applications, ensures that tablets will transform the way that companies and employees perform.

    Read the article

  • Convert the code from PHP to Ruby

    - by theband
    public function getFtime() { $records=array(); $sql="SELECT * FROM `finishedtime`"; $result=mysql_query($sql); if(!$result){throw new Exception(mysql_error());} if(mysql_num_rows($result)==0){return $records;} while($row=mysql_fetch_assoc($result)){$records[]=$row;} return $records; } I am in the process of learning Ruby; can anyone convert this code to Ruby? That would help me understand how to run a query and return the fetched results.

    Read the article

  • php mysql database users connection handling

    - by aviv
    What is the best way to handle MySQL database user connections in PHP? I have a web server running a PHP application on MySQL. I have created a database user for the application: dbuser1, with limited access - only for querying, inserting, and updating tables, no ALTER TABLE. Now the question is: should I use the same dbuser1 widely in my scripts, so that if there are 100 people currently using my system, and hence 100 scripts running in parallel, they all connect to the database with the same dbuser1? Or should I create a few users and assign each script a different user, or load-balance between the db users?

    Read the article

  • How to make a thread try to reconnect to the Database x times using JDBCTemplate

    - by gillJ
    Hi, I have a single thread trying to connect to a database using JdbcTemplate as follows: JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource); try{ jdbcTemplate.execute(new CallableStatementCreator() { @Override public CallableStatement createCallableStatement(Connection con) throws SQLException { return con.prepareCall(query); } }, new CallableStatementCallback() { @Override public Object doInCallableStatement(CallableStatement cs) throws SQLException { cs.setString(1, subscriberID); cs.execute(); return null; } }); } catch (DataAccessException dae) { throw new CougarFrameworkException( "Problem removing subscriber from events queue: " + subscriberID, dae); } I want to make sure that if the above code throws DataAccessException or SQLException, the thread waits a few seconds and tries to re-connect, say 5 more times, and then gives up. How can I achieve this? Also, if the database goes down during execution and comes up again, how can I ensure that my program recovers and continues running instead of throwing an exception and exiting?
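
    The retry behaviour being asked for - catch the failure, wait a few seconds, try again up to a fixed number of attempts, then give up - is independent of Spring or JDBC. A minimal sketch of that pattern, shown here in Python with an illustrative attempt count and delay; in the Java code the same loop would wrap the jdbcTemplate.execute(...) call and catch DataAccessException:

        import time

        def call_with_retries(action, attempts=5, delay_seconds=3):
            # Run `action` until it succeeds, retrying a bounded number of times
            # and re-raising the last error once the attempts are used up.
            last_error = None
            for _ in range(attempts):
                try:
                    return action()
                except Exception as exc:
                    last_error = exc
                    time.sleep(delay_seconds)
            raise last_error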

    Read the article

  • CREATE USER in MS Access 2010

    - by Anakela
    I have been searching for several hours regarding how to create a user using SQL for a database I am building in Access. I found several sources on Microsoft's website that say I can use the CREATE USER command to do this. However, whenever I attempt to run the query, an error saying "Syntax error in CREATE TABLE statement" pops up. What am I doing wrong? Thank you in advance for your help! If you're interested, the code format I am attempting to use is as follows: CREATE USER username, password, pid.

    Read the article

  • Track mass email campaigns

    - by daeliur
    Litmus released an email analytics service last month (May 2010). See here: http://litmusapp.com/email-analytics They boast very cool "read rate" tracking: they can track normal reads, skims, and glanced/deleted. How can they track skims and glanced/deleted? This to me seems impossible :) They also track forwards and prints. Prints are easy (they include a CSS @media print query with a background image). But forwards? I think this might be a combination of subsequent opens and different IPs/referring URLs. However, this means that if I open my mail and re-read it from another computer, it counts as a forward. Any ideas on this one? To summarize: Litmus Email Analytics says they can track email reads, skims, glanced/deleted, prints and forwards. How do they do it (skims, glanced/deleted and forwards)?

    Read the article

  • Subquery works in 9i but not in 11g

    - by Zsuetam
    The statement below works on Oracle 9i but not on Oracle 11g: SELECT * FROM ( SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL UNION ALL SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL ) WHERE zz IS NOT NULL AND TO_CHAR (hh) NOT IN ( SELECT DECODE ( scrnfail_rate, 0, -1, ROUND (LEVEL * 1 / (scrnfail_rate / 100)) - ROUND (1 / (2 * (scrnfail_rate / 100))) ) AS nno FROM DUAL WHERE NVL (scrnfail_rate, 0) > 0 CONNECT BY LEVEL <= ROUND(9 * scrnfail_rate / 100) ) It looks like Oracle 11g is ignoring the DECODE, or even the WHERE clause, in the subquery. This query should return two rows, as it does on Oracle 9i, but instead it results in ORA-01476: divisor is equal to zero on Oracle 11g EE 11.2.0.1.0 - 64bit. Can anyone help? Thanks!

    Read the article

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with a .mht file. I want to use FTS to search in this document. I got this working, but I'm not satisfied with the result. For example (sorry, it's in Dutch, but I think you get my point) I will use 2 words: zieken and ziekenhuis. As you can see, the word 'zieken' is contained in the word 'ziekenhuis'. When I search on 'ziekenhuis' I get about 20 results. When I search on 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at minimum the results which I get for 'ziekenhuis'? Here's the query I use: SELECT DISTINCT d.DocID 'Id', d.Titel, (SELECT afbeeldinglokatie FROM tbl_Afbeelding WHERE soort = 'beleid') as Pic, 'belDoc' as DocType FROM docs d JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID WHERE hb.InstellingID = @instellingId AND ( FREETEXT(d.Doel, @searchstring) OR FREETEXT(d.Toepassingsgebied, @searchstring) OR FREETEXT(d.HtmlDocument, @searchstring) OR FREETEXT (d.extraTabblad, @searchstring) ) AND d.StatusID NOT IN( 1, 5)

    Read the article

  • Single Large vs. Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/ OPTIONS table: option_id <will be referred instead of the name> name value group Query: select .. from options where group = '15' Since this table is expected to be multi-tenant, the number of rows could grow drastically. I believe splitting the tables instead of querying by the group would be easier to write and faster to execute - or perhaps partitioning by the group or tenant? Please suggest. Thanks

    Read the article

  • MySQL ALTER TABLE on very large table - is it safe to run it?

    - by Timothy Mifsud
    I have a MySQL database with one particular MyISAM table of over 4 million rows. I update this table about once a week with about 2000 new rows. After updating, I then perform the following statement: ALTER TABLE x ORDER BY PK DESC i.e. I order the table in question by the primary key field in descending order. This has not given me any problems on my development machine (Windows with 3GB memory), but, even though I have tried it successfully 3 times on the production Linux server (with 512MB RAM, achieving the resulting sorted table in about 6 minutes each time), the last time I tried it I had to stop the query after about 30 minutes and rebuild the database from a backup. I have started to wonder whether a 512MB server can cope with that statement (on such a large table), as I have read that a temporary table is created to perform the ALTER TABLE command. And, if it can be safely run, what should be the expected time for the alteration of the table? Thanks in advance, Tim

    Read the article

  • can oracle types be updated like tables?

    - by Omnipresent
    I am converting GTTs to Oracle types as explained in an excellent answer by APC. However, some GTTs are being updated based on a select query from another table. For example: UPDATE my_gtt_1 c SET (street, city, STATE, zip) = (SELECT src.unit_address, src.unit_city, src.unit_state, src.unit_zip_code FROM (SELECT mbr.ROWID row_id, unit_address, RTRIM(a.unit_city) unit_city, RTRIM(a.unit_state) unit_state, RTRIM(a.unit_zip_code) unit_zip_code FROM table_1 b, table_2 a, my_gtt_1 mbr WHERE type = 'ABC' AND id = b.ssn_head AND a.h_id = b.h_id AND row_id >= v_start_row AND row_id <= v_end_row) src WHERE c.ROWID = src.row_id) WHERE state IS NULL OR state = ' '; If my_gtt_1 were not a global temporary table but an Oracle collection type, would it be possible to do updates this complex? Or are we better off using the global temporary table in these cases?

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now, and 200K new records every day. The total size is around 15GB for now. My clients query the table via a PHP script mostly, and the size of the result set is around 10K records (not very large): select * from T where timestamp > X and timestamp < Y and additionFilters And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16GB of memory, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.

    Read the article

  • Using RDL files in Web ReportViewer

    - by user54064
    I want to use an RDL file with the query information stored in it. I don't want to have to convert it to an RDLC file. I have an ASP.NET app in which I want to show the report. I thought I would use a ReportViewer on my page and then have it use the RDL file. However, I get an error, and from researching it, it appears that I have to convert the file to an RDLC file. I don't want to strip out the data contained in the report. How can I show the report to the user by running the RDL report?

    Read the article

  • LINQ-to-XML to DataGridView: Cannot edit fields -- How to fix?

    - by Pretzel
    I am currently doing LINQ-to-XML and populating a DataGridView with my query just fine. The trouble I am running into is that once loaded into the DataGridView, the values appear to be un-editable (read-only). Here's my code: var barcodes = (from src in xmldoc.Descendants("Container") where src.Descendants().Count() > 0 select new { Id = (string)src.Element("Id"), Barcode = (string)src.Element("Barcode"), Quantity = float.Parse((string)src.Element("Quantity").Attribute("value")) }).Distinct(); dataGridView1.DataSource = barcodes.ToList(); I read somewhere that the "DataGridView will be in ReadOnly mode when you use Anonymous types," but I couldn't find an explanation of why, or exactly what to do about it. Any ideas?

    Read the article

  • Repeatedly querying xml using python

    - by Jack
    I have some xml documents I need to run queries on. I've created some python scripts (using ElementTree) to do this, since I'm vaguely familiar with using it. The way it works is I run the scripts several times with different arguments, depending on what I want to find out. These files can be relatively large (10MB+) and so it takes rather a long time to parse them. On my system, just running: tree = ElementTree.parse(document) takes around 30 seconds, with a subsequent findall query only adding around a second to that. Seeing as the way I'm doing this requires me to repeatedly parse the file, I was wondering if there was some sort of caching mechanism I can use so that the ElementTree.parse computation can be reduced on subsequent queries. I realise the smart thing to do here may be to try and batch as many queries as possible together in the python script, but I was hoping there might be another way. Thanks.
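
    One way to avoid paying the parse cost once per question is to batch the queries, as the asker suggests: parse once and run every query against the same in-memory tree, passing all the XPath expressions to a single invocation of the script. A minimal sketch of that approach (the argument handling is illustrative):

        import sys
        import xml.etree.ElementTree as ET

        def run_queries(document, xpaths):
            # Parse once (the expensive step), then answer every query
            # against the same in-memory tree.
            tree = ET.parse(document)
            return {xp: tree.findall(xp) for xp in xpaths}

        if __name__ == "__main__":
            doc, *queries = sys.argv[1:]
            for xp, elements in run_queries(doc, queries).items():
                print(xp, len(elements))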

    Read the article

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and that you need to do data validation during the ETL process (e.g. dates are valid, etc.). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.
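
    One lightweight way to give a subject matter expert that side-by-side view is a small script that runs a paired set of checks - a row count and a business-level aggregate, for instance - against both the cleansing database and the mart and reports whether they agree. In the sketch below, sqlite3 is only a stand-in driver and every table and column name is made up for illustration; the real connections and check queries would come from your own schema.

        import sqlite3  # stand-in; use the drivers for your actual databases

        # Hypothetical check queries -- table and column names are illustrative only.
        CHECKS = {
            "row count":    ("SELECT COUNT(*) FROM cleansed_sales",
                             "SELECT COUNT(*) FROM sales_fact"),
            "total amount": ("SELECT SUM(amount) FROM cleansed_sales",
                             "SELECT SUM(amount) FROM sales_fact"),
        }

        def compare(source_conn, mart_conn):
            # Run each check against both sides and record whether the values match.
            report = {}
            for name, (source_sql, mart_sql) in CHECKS.items():
                source_value = source_conn.execute(source_sql).fetchone()[0]
                mart_value = mart_conn.execute(mart_sql).fetchone()[0]
                report[name] = (source_value, mart_value, source_value == mart_value)
            return report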

    Read the article
