Search Results

Search found 10657 results on 427 pages for 'group'.

Page 371/427 | < Previous Page | 367 368 369 370 371 372 373 374 375 376 377 378  | Next Page >

  • Provider not notified from cookbook_file

    - by wittyhandle
    I'm working on an SSL provider using Vagrant (1.0.5) and chef-solo (10.12.0). I have a provider called ssl inside a cookbook called gtm_cq, and I declare the resource in my cookbook's default recipe:

        gtm_cq_ssl "author" do
          # attributes will come later
        end

    I then have the cookbook_file below, which should notify my ssl provider's import action once it pushes the cert up to the server:

        cookbook_file "#{node[:cq][:ssl][:author_cert_location]}/foo.cer" do
          source "foo.cer"
          owner "crx"
          group "root"
          mode "0644"
          notifies :import, resources(:gtm_cq_ssl => "author")
        end

    When I run this, foo.cer gets pushed up as expected, but the import action of my ssl provider is never called. The only references I see are these couple of lines in the log (log headers removed):

        .. cookbook_file[/opt/cq5/author/foo.cer] sending import action to gtm_cq_ssl[author] (delayed)
        .. Processing gtm_cq_ssl[author] action import (gtm_cq::author line 34)

    The import action contains a large, very obvious log statement as well as another cookbook_file that pushes a test file up to the server. No log statement appears and no test file is pushed. I'm also certain that foo.cer is removed from the server before each test. I found that if I add :immediately to my notifies line, like so, it seems to work:

        notifies :import, resources(:gtm_cq_ssl => "author"), :immediately

    That's probably acceptable in my particular case, but something seems wrong if this is the only way I can call my provider. Any help on this would be greatly appreciated. Thanks!

    Read the article

  • Using DisplayFor inside a display template

    - by Oenotria
    I've created an HtmlHelper extension to reduce the amount of repetitive markup when creating forms:

        public static MvcHtmlString RenderField<TModel, TValue>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TValue>> expression)
        {
            return htmlHelper.DisplayFor(expression, "formfield");
        }

    The idea is that inside my views I can just write @Html.RenderField(x => x.MyFieldName) and it will print the label and the field's content with the appropriate div tags already in place. Inside the DisplayTemplates folder I have created formfield.cshtml containing the following:

        <div class="display-group">
            <div class="display-label">
                @Html.LabelFor(x => x)
            </div>
            <div class="display-field">
                @Html.DisplayFor(x => x)
            </div>
        </div>

    Unfortunately it doesn't appear to be possible to nest DisplayFor inside a display template (it doesn't render anything). I don't want to just use @Model, because then I won't get checkboxes for boolean values, calendar controls for dates, etc. Is there a good way around this?

    Read the article

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All, Background: I have a customer with some Python-based datacenter build scripts that I've inherited. I did not work on the original design, so I'm limited to some degree in what I can and can't change. That said, my customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately they have other applications that also use these values, so I cannot change them to make things easier for me. What I want to do is make the scripts more dynamic, so that I can distribute more hosts without having to keep updating the scripts in the future, and can just add more hosts to the property file. Unfortunately I can't change the current property file and have to work with it. The property file looks something like this:

        projectName.ClusterNameServer1.sslport=443
        projectName.ClusterNameServer1.port=80
        projectName.ClusterNameServer1.host=myHostA
        projectName.ClusterNameServer2.sslport=443
        projectName.ClusterNameServer2.port=80
        projectName.ClusterNameServer2.host=myHostB

    In their deployment scripts they basically have a lot of "if projectName.ClusterNameServerX" checks, where X is some number of defined entries, and then do something, e.g.:

        if projectName.ClusterNameServer1.host != "" do X
        if projectName.ClusterNameServer2.host != "" do X
        if projectName.ClusterNameServer3.host != "" do X

    Then when they add another host (say Server4) they add another if statement. Question: What I would like to do is make the scripts more dynamic: parse the properties file, put what I need into some data structure to pass to the deployment scripts, and then just iterate over the structure and do my deployment that way, so I don't have to constantly add a bunch of "if some host#, do something" checks. I'm just curious to get some suggestions as to how others would parse the file, what sort of data structure they would use, and how they would group things together, by ClusterNameServer# or something else. Thanks

    Read the article

  • Prompting for authentication from a wxPython program and passing it along to IIS?

    - by MetaHyperBolic
    I have a client (written in Python, with a wxPython front end in dead-simple wizard style) which communicates with a website running IIS. A Python script receives requests and does the usual client-server dance. I would have written this as a browser application, but for the requirement that certain things happen on the local PC that the web can't help with (file manipulation, interfacing with certain USB hardware, etc.). Right now, I am simply using the logon credentials, compounded as a string from os.environ['USERDOMAIN'] and os.environ['USERNAME'], to pass along to the server, which connects to Active Directory and enumerates the members of the group, looking for those logon credentials. It's an ugly hack, but it works. Obviously, I could make people log out of the generic helper accounts and log back into Windows using specific accounts. However, I wondered how feasible it would be to provide some kind of logon prompt wherein the user can type in a name and password, then some kind of authorization token could be passed on to IIS. This seems like something I would not want to do myself, given that amateurs almost always make huge security mistakes. Now you can see why I am wishing this was purely web-based. What's a good way to handle this?

    Read the article

  • how to find missing rows in oracle

    - by user203212
    Hi, I have 2 tables:

        create table ORDERS (
          ORDER_NO        NUMBER(38,0) not null,
          ORDER_DATE      DATE not null,
          SHIP_DATE       DATE null,
          SHIPPING_METHOD VARCHAR2(12) null,
          TAX_STATUS      CHAR(1) null,
          SUBTOTAL        NUMBER null,
          TAX_AMT         NUMBER null,
          SHIPPING_CHARGE NUMBER null,
          TOTAL_AMT       NUMBER null,
          CUSTOMER_NO     NUMBER(38,0) null,
          EMPLOYEE_NO     NUMBER(38,0) null,
          BRANCH_NO       NUMBER(38,0) null,
          constraint ORDERS_ORDERNO_PK primary key (ORDER_NO)
        );

    and

        create table PAYMENTS (
          PAYMENT_NO  NUMBER(38,0) NOT NULL,
          CUSTOMER_NO NUMBER(38,0) null,
          ORDER_NO    NUMBER(38,0) null,
          AMT_PAID    NUMBER NULL,
          PAY_METHOD  VARCHAR(10) NULL,
          DATE_PAID   DATE NULL,
          LATE_DAYS   NUMBER NULL,
          LATE_FEES   NUMBER NULL,
          constraint PAYMENTS_PAYMENTNO_PK primary key (PAYMENT_NO)
        );

    I am trying to find how many late orders each customer has. The LATE_DAYS column in the PAYMENTS table holds how many days the customer is late making payments for any particular order, so I am running this query:

        SELECT C.CUSTOMER_NO, C.lname, C.fname, sysdate,
               COUNT(P.ORDER_NO) as number_LATE_ORDERS
        FROM CUSTOMER C, orders o, PAYMENTS P
        WHERE C.CUSTOMER_NO = o.CUSTOMER_NO
          AND P.order_no = o.order_no
          AND P.LATE_DAYS > 0
        group by C.CUSTOMER_NO, C.lname, C.fname

    That is, I am counting the orders that have any late payments (late_days > 0). But this gives me only the customers who have orders with late_days > 0; the customers who do not have any late orders are not showing up. So if one customer has 5 orders with late payments it shows 5 for that customer, but if a customer has 0 late orders, that customer is not selected by this query. Is there any way to select all the customers, showing the count if a customer has late orders and 0 if he does not?
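
    One common fix is to drive the query from CUSTOMER with outer joins and count conditionally, so customers with no late orders still appear with a 0. A rough, untested sketch against the tables above:

        SELECT c.customer_no, c.lname, c.fname,
               COUNT(CASE WHEN p.late_days > 0 THEN p.order_no END) AS number_late_orders
        FROM   customer c
               LEFT OUTER JOIN orders o   ON o.customer_no = c.customer_no
               LEFT OUTER JOIN payments p ON p.order_no    = o.order_no
        GROUP  BY c.customer_no, c.lname, c.fname;

    The CASE expression is NULL for rows that are not late, and COUNT ignores NULLs; COUNT(DISTINCT ...) could be used instead if a single order can have several late payments.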

    Read the article

  • What is the best way to go about grouping rows by the same timestamp?

    - by Luke
    Hello all. I am looking for some advice. I have rows of data in the database that I want to group together. There is a timestamp involved; the column is called date. What is the best way to go about grouping rows that share the same timestamp?

    EDITED.....

        <?
        $result = mysql_query("SELECT * FROM ".TBL_FIXTURES." ORDER BY date");
        $current_week = null;
        while ($row = mysql_fetch_assoc($result)) {
            if ($row['date'] != $current_week) {
                $current_week = $row['date'];
                echo 'Week ' . $current_week . ': ';
            }
            echo $row['home_user'];
            echo $row['home_team'];
            echo $row['away_user'];
            echo $row['away_team'];
        }
        ?>

    I have this code. What I am trying to do is organise each round of fixtures in a row with a title "Week 1 - date". I want "Week 1", the date, and all fixtures with that date displayed, then move on to week 2 with its date and all of its fixtures again. This should be done for every fixture in the database, so if there are 6 rounds of fixtures, there will be 6 dates and therefore 6 blocks of fixtures. Please help, thanks

    Read the article

  • Is it possible to have a mysql table accept a null value for a primary_key column referencing a diff

    - by Dr.Dredel
    I have a table with a column which holds the id of a row in another table. However, when table A is being populated, table B may or may not have a row ready for table A. My question is: is it possible to have MySQL prevent an invalid value from being entered, but be OK with a NULL? Or does a foreign key necessitate a valid related value? So what I'm looking for (in pseudo code) is this:

        Table "person"
        id | name

        Table "people"
        id | group_name | person_id   (foreign key id from table person)

        insert into person (1, 'joe');
        insert into people (1, 'foo', 1)    //kosher
        insert into people (1, 'foo', NULL) //also kosher
        insert into people (1, 'foo', 7)    // should fail since there is no id 7 in the person table

    The reason I need this is that I'm having a chicken-and-egg issue where it makes perfect sense for the rows in the people table to be created beforehand (in this example, I'm creating the groups and would like them to pre-exist the people who join them). And I realize that THIS example is silly and I would just put the group id in the person table rather than vice versa, but in my real-world problem that is not workable. Just curious if I need to allow any and all values in order to make this work, or if there's some way to allow for NULL.
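
    For what it's worth, InnoDB foreign keys do allow NULL in the referencing column while still rejecting values that don't exist in the parent table. A minimal sketch (table and column names taken from the pseudo code above; the ids in the last two inserts are changed only to avoid the duplicate primary key):

        CREATE TABLE person (
            id   INT PRIMARY KEY,
            name VARCHAR(50)
        ) ENGINE=InnoDB;

        CREATE TABLE people (
            id         INT PRIMARY KEY,
            group_name VARCHAR(50),
            person_id  INT NULL,                          -- nullable on purpose
            FOREIGN KEY (person_id) REFERENCES person (id)
        ) ENGINE=InnoDB;

        INSERT INTO person VALUES (1, 'joe');
        INSERT INTO people VALUES (1, 'foo', 1);          -- ok
        INSERT INTO people VALUES (2, 'bar', NULL);       -- ok, FK check is skipped for NULL
        INSERT INTO people VALUES (3, 'baz', 7);          -- fails: no person with id 7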

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here. I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. The system relies on three tables: USERS, GROUPS and GROUP_USERS. The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns: USER_ID and GROUP_ID. Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I give them the constraint UNIQUE, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's all numbers anyway, but it feels quite wasteful to have a full-sized index referring to such a small table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there another solution I should be aware of? PS. I don't want to use an array column, even though they are supported by PostgreSQL; I want to be as generic as possible so I can switch databases later on, if necessary. EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
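
    For reference, the usual shape of such a join table is a two-column composite primary key plus one secondary index for lookups in the other direction. A minimal sketch (the primary key column names of USERS and GROUPS are assumed here):

        CREATE TABLE group_users (
            user_id  integer NOT NULL REFERENCES users (user_id),
            group_id integer NOT NULL REFERENCES groups (group_id),
            PRIMARY KEY (user_id, group_id)   -- enforces uniqueness and indexes (user_id, group_id)
        );

        -- Optional: speeds up "which users are in group X" lookups
        CREATE INDEX group_users_group_id_idx ON group_users (group_id);

    Two integer indexes on a table this narrow are small, so this layout generally stays fast well beyond tens of thousands of rows.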

    Read the article

  • Is it possible to have WAMP run httpd.exe as user [myself] instead of local SYSTEM?

    - by Olivier H
    Hello! I run a Django application over Apache with mod_wsgi, using WAMP. A certain URL allows me to stream the content of image files whose paths are stored in the database. The files can be located either on the local machine or under a network folder (\\my\network\folder). With the development server (manage.py runserver), I have no trouble at all reading and streaming the files. With WAMP, and with network drive files, I get an IOError: obviously because the httpd instance does not have read permission on said drive. In the task manager, I see that httpd.exe is run by SYSTEM. I would like to tell WAMP to run the server as [myself], as I have read and write permissions on the shared folder (eventually, the production server should be run by a 'www-admin' user having the permissions). Mapping the network shared folder to a drive letter (Z: for instance) does not solve this at all. The User/Group directives in httpd.conf do not seem to have any kind of influence on Apache's behaviour. I've also tried the registry: I duplicated the HKLM[...]\wampapache registry key under HK_CURRENT_USER\ and renamed the original key, but then the new key does not seem to be found when I run

        httpd.exe -n wampapache -k start

    from the command line, or when I run WAMP. I've run out of ideas :) Has anybody ever had the same issue?

    Read the article

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. I wrote a query to find all open/reopened tasks and recently changed tasks, but even with a rather small number of tasks (<1000) execution time has become very long (0.5s).

    Tasks:

        Field        Type     Null  Key  Default  Extra
        taskid       int(11)  NO    PRI  NULL     auto_increment
        description  text     NO         NULL

    Tasktransitions:

        Field             Type       Null  Key  Default            Extra
        tasktransitionid  int(11)    NO    PRI  NULL               auto_increment
        taskid            int(11)    NO    MUL  NULL
        status            int(11)    NO    MUL  NULL
        description       text       NO         NULL
        userid            int(11)    NO         NULL
        transitiondate    timestamp  NO         CURRENT_TIMESTAMP

    Query:

        SELECT tasks.taskid, tasks.description, tasklaststatus.status
        FROM tasks
        LEFT OUTER JOIN (
            SELECT tasktransitions.taskid, tasktransitions.transitiondate, tasktransitions.status
            FROM tasktransitions
            INNER JOIN (
                SELECT taskid, MAX(transitiondate) AS lasttransitiondate
                FROM tasktransitions
                GROUP BY taskid
            ) AS tasklasttransition
                ON tasklasttransition.lasttransitiondate = tasktransitions.transitiondate
               AND tasklasttransition.taskid = tasktransitions.taskid
        ) AS tasklaststatus ON tasklaststatus.taskid = tasks.taskid
        WHERE tasklaststatus.status IS NULL
           OR tasklaststatus.status = 0
           OR tasklaststatus.transitiondate > '2013-09-01';

    I'm wondering if the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some, but I don't see great improvements. The current indexes on tasktransitions are (all BTREE):

        Key_name        Columns                  Cardinality
        PRIMARY         tasktransitionid         896
        taskid_date_ix  taskid, transitiondate   896, 896
        status_ix       status                   3

    Any other suggestions?
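
    Not an authoritative answer, but one commonly suggested rewrite for this kind of latest-row-per-task query is a correlated subquery in the join condition, which lets the optimizer use the existing taskid_date_ix (taskid, transitiondate) index instead of materialising two derived tables. A rough, untested sketch:

        SELECT t.taskid, t.description, tt.status
        FROM tasks t
        LEFT JOIN tasktransitions tt
               ON tt.taskid = t.taskid
              AND tt.transitiondate = (SELECT MAX(tt2.transitiondate)
                                       FROM tasktransitions tt2
                                       WHERE tt2.taskid = t.taskid)
        WHERE tt.status IS NULL
           OR tt.status = 0
           OR tt.transitiondate > '2013-09-01';

    Whether this actually helps depends on the MySQL version and optimizer; comparing EXPLAIN output for both forms would show the difference.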

    Read the article

  • How to query multiple tables with multiple selects MySQL

    - by brybam
    I'm trying to write a PHP function that gets all the basic food data (this part works fine in my code), then grabs all the ingredients related to the ids of the items selected by the first query. Anyone have any idea why my second array keeps coming back as false? I should be getting one array with the list of foods (this works); the second one should be an array with all the ingredients for all the foods that were previously selected. Then in my code I'm planning to work with the arrays and sort the ingredients under the proper foods based on the ids.

        function getFood($start, $limit) {
            $one = mysql_query("SELECT a.id, a.name, a.type, AVG(b.r) AS fra, COUNT(b.id) as tvotes
                                FROM `foods` a
                                LEFT JOIN `foods_ratings` b ON a.id = b.id
                                GROUP BY a.id
                                ORDER BY fra DESC, tvotes DESC
                                LIMIT $start, $limit;");
            $row = mysql_fetch_array($one);
            $qry = "";
            foreach ($row as &$value) {
                $fid = $value['id'];
                $qry = $qry . "SELECT ing, amount FROM foods_ing WHERE fid='$fid';";
            }
            $two = mysql_query($qry);
            return array ($one, $two);
        }
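
    One likely culprit is that mysql_query() only runs a single statement, so a concatenated string of several SELECTs fails. A rough alternative (the ids shown are hypothetical) is to fetch all ingredients for the batch in one query and split them up by fid afterwards in PHP:

        SELECT fid, ing, amount
        FROM foods_ing
        WHERE fid IN (3, 7, 12)   -- ids collected from the first result set
        ORDER BY fid;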

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... // other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?
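
    Not a definitive fix, but the plan shows the sequential scan on files alone taking about 62 seconds for ~800k rows, which usually points at table bloat or stale statistics rather than the join itself. Two things that are often worth trying, sketched below (untested):

        -- Refresh statistics and reclaim dead space
        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;

        -- Optional: on recent PostgreSQL versions a covering index on the two
        -- columns the query needs from "files" can allow an index-only scan.
        CREATE INDEX files_id_filetype_idx ON files (id, filetype);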

    Read the article

  • php lampp permissions of fopen function

    - by marmoushismail
    Hi, I'm programming PHP using netbeans 6.8, LAMPP for Ubuntu (XAMPP), and the Apache that came with XAMPP.

        $fh = fopen("testfile2.txt", 'w') or die("Failed to create file");
        $text = "hello man cool good";
        fwrite($fh, $text) or die("Could not write to file");
        fclose($fh);
        echo "File 'testfile.txt' written successfully";

    I get the following error:

        Warning: fopen(testfile2.txt) [function.fopen]: failed to open stream: Permission denied
        in /home/marmoush/allprojects/phpprojects/myindex.php on line 91
        Failed to create file

    Anyway, I know what this error is about: folder and file permissions. I looked into the folder's permission tab and made access available for the "others" group (to read and write). The program worked and the result was a file (test.txt). Then I looked at the created file's permissions, and it appears that (PHP, XAMPP or whoever) creates files with "nobody" permissions. I have 2 questions: 1. What if I need the file created by (PHP code and XAMPP) to have the "root or user or myname" permissions? Where do I set this? 2. Also my concern: when I send these files to an actual web server, will files it creates also have "nobody" permissions?

    Read the article

  • How do I put data from multiple records into different columns?

    - by Bryan
    My two tables are titled analyzed and analyzedCopy3. I'm trying to put information from analyzedCopy3 into multiple columns in analyzed. Sample data from analyzedCopy3:

        readings_miu_id   OriginalCol      ColRSSIz
        110001366         Frederick Road   -108
        110001366         Steel Street
        110001366         Fifth Ave.
        110001508         Steel Street     -104

    What I want to do is put the top 3 OriginalCol, ColRSSIz combinations into columns that I have in the table analyzed. In analyzed there is only one record for each unique readings_miu_id. Any ideas? Thanks in advance.

    Additional info: By "top 3 OriginalCol, ColRSSIz combinations" I mean the three combinations with the highest value in the ColRSSIz column. For any readings_miu_id there could be anywhere from 1 row to 6 rows of information, so I only want the top 3 at most. If there are fewer than 3 rows for a readings_miu_id then the other columns need to be blank.

    Query that generates the table "analyzed":

        strSql4 = " SELECT readings_miu_id, Count(readings_miu_id) as NumberOfReads, " & _
                  " First(PercentSuccessz) as PercentSuccess, First(Readingz) as Reading, " & _
                  " First(MIUwindowz) as MIUwindow, First(SNz) as SN, First(Noisez) as Noise, " & _
                  " First(RSSIz) as RSSI, First(ColRSSIz) as ColRSSI, First(MIURSSIz) as MIURSSI, " & _
                  " First(Col1z) as Col1, First(Col1RSSIz) as Col1RSSI, First(Col2z) as Col2, " & _
                  " First(Col2RSSIz) as Col2RSSI, First(Col3z) as Col3, First(Col3RSSIz) as Col3RSSI, " & _
                  " First(Firmwarez) as Firmware, First(CFGDatez) as CFGDate, First(FreqCorrz) as FreqCorr, " & _
                  " First(Activez) as Active, First(MeterTypez) as MeterType, First(OriginColz) as OriginCol, " & _
                  " First(ColIDz) as ColID, First(Ownagez) as Ownage, First(SiteIDz) as SiteID, " & _
                  " First(PremIDz) as PremID, First(prem_group1z) as prem_group1, " & _
                  " First(prem_group2z) as prem_group2, First(ReadIDz) as ReadID, First(prem_addr1z) as prem_addr1 " & _
                  "INTO analyzed " & _
                  "FROM analyzedCopy2 " & _
                  "GROUP BY readings_miu_id, PremIDz; "

        DoCmd.SetWarnings False
        DoCmd.RunSQL strSql4
        DoCmd.SetWarnings True
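
    A common Access SQL pattern for picking the top 3 rows per readings_miu_id is a correlated TOP 3 subquery. A rough, untested sketch against analyzedCopy3 (note that TOP in Access can return more than 3 rows when ColRSSIz values tie):

        SELECT a.readings_miu_id, a.OriginalCol, a.ColRSSIz
        FROM analyzedCopy3 AS a
        WHERE a.OriginalCol IN (
              SELECT TOP 3 b.OriginalCol
              FROM analyzedCopy3 AS b
              WHERE b.readings_miu_id = a.readings_miu_id
              ORDER BY b.ColRSSIz DESC)
        ORDER BY a.readings_miu_id, a.ColRSSIz DESC;

    The result would still need to be pivoted or written into the Col1/Col2/Col3 columns of analyzed, for example with an update loop in VBA.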

    Read the article

  • iPhone: Same Rows Repeated in Each Section of Grouped UITableview

    - by Rank Beginner
    I have an app that is a list of tasks, like a to do list. The user configures the tasks and that goes to the SQLite db. The list is displayed in a tableview. The SQL table in question consists of a taskid int, groupname varchar, taskname varchar, lastcompleted datetime, nextdue datetime, weighting integer. I currently have it working by creating an array from each column in the SQL table. In the tableView:cellForRowAtIndexPath: method, I create the controls for each task by binding their values to the array for each column. I want to add configurable task groups that should display as the section titles. I got the task groups to display as the section headers. My problem is that all the task rows are repeated in each group under each header. How do I get the correct rows to show up only under the correct section? I'm really new to development period and took on a hobby of trying to teach myself how to develop iphone apps. So, pretty please, be a little more detailed than you normally would with a professional developer. :)

    Read the article

  • AJAX Request Erroring

    - by 30secondstosam
    I cannot figure out for the life of me why this is erroring. Can someone please cast a second pair of eyes over this? It's probably something really stupid. Thanks in advance for the help.

    EDIT: Stupidly, I was putting the wrong URL in. HOWEVER... now that I have put the correct URL in, the site just hangs and crashes. Any ideas please?

    HTML:

        <input id="pc" class="sfc" name="ProdCat[]" type="checkbox" value="">
        <input id="psc" class="sfc" name="ProdSubCat[]" type="checkbox" value="">
        <input id="bf" class="sfc" name="BrandFil[]" type="checkbox" value="">

    jQuery:

        $('input[type="checkbox"]').change(function(){
            var name = $(this).attr("name");
            var ProdCatFil = [];
            var ProdSubCatFil = [];
            var BrandFil = [];
            // Loop through the checked checkboxes in the same group
            // and add their values to an array
            $('input[type="checkbox"]:checked').each(function(){
                switch(name) {
                    case 'ProdCatFil[]':
                        ProdCatFil.push($(this).val());
                        break;
                    case 'ProdSubCatFil[]':
                        ProdSubCatFil.push($(this).val());
                        break;
                    case 'BrandFil[]':
                        BrandFil.push($(this).val());
                        break;
                }
            });
            $("#loading").ajaxStart(function(){
                $(this).show();
                $('.content_area').hide();
            });
            $("#loading").ajaxStop(function(){
                $(this).hide();
                $('.content_area').show();
            });
            $.ajax({
                type: "GET",
                url: '../ajax/ajax.php',
                data: 'ProdCatFil='+ProdCatFil+'&ProdSubCatFil='+ProdSubCatFil+'&BrandFil='+BrandFil,
                success: function(data) {
                    $('.content_area').html(data);
                }
            }).error(function (event, jqXHR, ajaxSettings, thrownError) {
                $('.content_area').html("<h2>Could not retrieve data</h2>");
                //alert('[event:' + event + '], [jqXHR:' + jqXHR + '], [ajaxSettings:' + ajaxSettings + '], [thrownError:' + thrownError + '])');
            });
        }); // end change()

    PHP (just to prove it's working):

        echo '<h1>TEST TEST TEST </h1>';

    The errors from the jQuery alert box:

        [event:[object Object]], [jqXHR:error], [ajaxSettings:Not Found], [thrownError:undefined])

    Read the article

  • WP7 help with menu effects

    - by MattMacdonald
    Kinda new to Silverlight and have some experience with WPF but I'm doing a project with a group for a class making a game for WP7. I am currently in charge of the menu system for the game and I had a few ideas for "flashy" menu transitions. I got some going for the main menu but I wanted to do something cool for the options submenu. Anyway my idea is to either fashion an expander or to have sort of a variation of a dialog box. But the way I envisioned it would be in either case the menu items blur but are still visible while the expanded menu is displayed or while the dialog is active. If I'm being confusing sorry :) but think of Windows 7 glass effect on the menu while other options are available. What I'm getting at is I want to give this a shot but I have no idea how I would go about doing something like this. Could anyone point me in the right direction or outline some key steps for me to build off? I tried finding something like this on Google but no such luck.

    Read the article

  • Retrieving my own data via FaceBook API

    - by goggin13
    I am building a website for a comedy group which uses Facebook as one of their marketing platforms; one of the requirements for the new site is to display all of their Facebook events on a calendar. Currently, I am just trying to put together a Python script which can pull some data from my own Facebook account, like a list of all my friends. I presume once I can accomplish this I can move to pulling more complicated data out of my clients account (since they have given me access to their account). I have looked at many of the posts here, and also went through the Facebook API documentation, including Facebook Connect, but am really beating my head against the wall. Everything I have read seems like overkill, as it involves setting up a good deal of infrastructure to allow my app to set up connections to any arbitrary user's account (who authorizes me). Shouldn't it be much simpler, given I only ever need to access 1 account? I cannot find a way to retrieve data without having to display the Facebook login window. I have a script which will retrieve all my friends, but it includes a redirect where I have to physically log myself in to Facebook. Would appreciate any advice or links, I just feel like I must be missing something simple. Thank you!

    Read the article

  • Windows: Does something temporarily grab the com ports on startup?

    - by Tim
    I have a WPF/C# app that is launched as part of the "Startup" group on a Windows Embedded Standard machine. One of the first things the app does (in its static App() method) is create a new SerialPort object for COM1. COM1 is a hardwired serial port, not a USB virtual port or anything like that. My problem is that every so often (maybe 1 out of 12) on startup, I get an exception: System.UnauthorizedAccessException: Access to the port 'COM1' is denied. There are no other applications using this port. Also, when I relaunch the app following this error, it grabs the port just fine. It's as if the com port isn't ready/set up for my app sometimes. I'm clueless on this one! Any insight is appreciated! UPDATE: I added a call to SerialPort.GetPortNames() and printout all available ports before attempting to open the port. In the failure case COM1 is indeed THERE! So, it's not that the port isn't ready. It looks like something in Windows is actually grabbing the port temporarily and blocking me.

    Read the article

  • How to add a new entry to a multiple has_many association?

    - by siulamvictor
    I am not sure if I am doing this correctly. I have 3 models: Account, User, and Event. An Account contains a group of Users. Each User has its own username and password for login, but Users under the same Account can access the same Account data. An Event is created by a User, and other Users in the same Account can also read or edit it. I created the following migrations and models.

    User migration:

        class CreateUsers < ActiveRecord::Migration
          def self.up
            create_table :users do |t|
              t.integer :account_id
              t.string :username
              t.string :password
              t.timestamps
            end
          end

          def self.down
            drop_table :users
          end
        end

    Account migration:

        class CreateAccounts < ActiveRecord::Migration
          def self.up
            create_table :accounts do |t|
              t.string :name
              t.timestamps
            end
          end

          def self.down
            drop_table :accounts
          end
        end

    Event migration:

        class CreateEvents < ActiveRecord::Migration
          def self.up
            create_table :events do |t|
              t.integer :account_id
              t.integer :user_id
              t.string :name
              t.string :location
              t.timestamps
            end
          end

          def self.down
            drop_table :events
          end
        end

    Account model:

        class Account < ActiveRecord::Base
          has_many :users
          has_many :events
        end

    User model:

        class User < ActiveRecord::Base
          belongs_to :account
        end

    Event model:

        class Event < ActiveRecord::Base
          belongs_to :account
          belongs_to :user
        end

    So.... Is this setup correct? Every time a user creates a new account, the system will ask for the user information, e.g. username and password. How can I add them into the correct tables? And how can I add a new event? I am sorry for such a long question; I am not very familiar with the Rails way of handling this kind of data structure. Thank you for answering me. :)

    Read the article

  • MS-Access auto enter information based on date

    - by Desert Spider
    I have a query that calculates an employee's anniversary date. I would like that query to generate an entry event in my table based on the current date: basically, automatically generate an anniversary vacation accrual when the employee's anniversary date comes around. Here is an example of my table.

    Table name "SchedulingLog":

        LogID      "Primary Key, AutoNumber"
        UserID     "Employee specific"
        LogDate
        EventDate
        Category   "ex. Vacation, Anniversary..."
        CatDetail  "ex. Vacation Day Used, Anniversary..."
        Value      "ex. -1, 1..."

    My query, "qry_YOS":

        UserID            "Employee specific"
        DOH               "Employee hire date"
        YearsOfService    "calculated field"
        Annual            "calculated field"
        Schedule          "Employee specific"
        Annual Vac Days   "calculated field"
        Anniversary       "calculated field"

    Query-associated SQL:

        INSERT INTO schedulinglog (userid, [value], eventdate, logdate, category, catdetail)
        SELECT roster.userid,
               [annual] * [schedule] AS [Value],
               Month([wm doh]) & "/" & Day([wm doh]) & "/" & Year(DATE()) AS EventDate,
               DATE() AS LogDate,
               category.[category name] AS Category,
               catdetail.catdetail
        FROM roster, tblaccrual, category
             INNER JOIN catdetail ON category.categoryid = catdetail.categoryid
        WHERE (( ( [tblaccrual] ! [years] ) < Round(( DATE() - [wm doh] ) / 365, 2) ))
        GROUP BY roster.userid, roster.[wm doh],
                 Round(( DATE() - [wm doh] ) / 365, 2),
                 roster.schedule,
                 Month([wm doh]) & "/" & Day([wm doh]) & "/" & Year(DATE()),
                 DATE(),
                 category.[category name],
                 catdetail.catdetail
        HAVING ( ( ( category.[category name] ) LIKE "vacation*" )
                 AND ( ( catdetail.catdetail ) LIKE "anniversary*" ) );

    I know it is possible; I just don't know where to begin.
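
    One piece usually needed for a "run it any day, it only fires on the anniversary" setup is a filter that matches the hire date's month and day against today, plus a duplicate guard so re-running the query does not double-book the accrual. A rough, untested Access SQL fragment along those lines, to be merged into the existing INSERT (the alias r for roster is just for illustration):

        SELECT r.userid
        FROM roster AS r
        WHERE Month(r.[wm doh]) = Month(Date())
          AND Day(r.[wm doh])   = Day(Date())
          AND NOT EXISTS (SELECT 1
                          FROM schedulinglog AS s
                          WHERE s.userid = r.userid
                            AND s.catdetail LIKE "anniversary*"
                            AND Year(s.eventdate) = Year(Date()));

    The query could then be launched from a scheduled task or a startup form's Open event so it runs whenever the database is used.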

    Read the article

  • Question on SQL Grouping

    - by Lijo
    Hi Team, I am trying to achieve the following without using a subquery. For a funding, I would like to select the latest Letter created date and the 'earliest WorkList created since the letter was created' date.

        FundingId  Letter
        (1, 1/1/2009) (1, 5/5/2009) (1, 8/8/2009) (2, 3/3/2009)

        FundingId  WorkList
        (1, 5/5/2009) (1, 9/9/2009) (1, 10/10/2009) (2, 2/2/2009)

        Expected result:
        FundingId  Letter     WorkList
        (1, 8/8/2009, 9/9/2009)

    I wrote the query below. It has a bug: it omits those FundingIds for which the minimum WorkList date is earlier than the latest Letter date (even though the funding has another WorkList with a date greater than the letter-created date).

        CREATE TABLE #Funding(
            [Funding_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_No] [int] NOT NULL,
            CONSTRAINT [PK_Center_Center_ID] PRIMARY KEY NONCLUSTERED ([Funding_ID] ASC)
        ) ON [PRIMARY]

        CREATE TABLE #Letter(
            [Letter_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_ID] [int] NOT NULL,
            [CreatedDt] [SMALLDATETIME],
            CONSTRAINT [PK_Letter_Letter_ID] PRIMARY KEY NONCLUSTERED ([Letter_ID] ASC)
        ) ON [PRIMARY]

        CREATE TABLE #WorkList(
            [WorkList_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_ID] [int] NOT NULL,
            [CreatedDt] [SMALLDATETIME],
            CONSTRAINT [PK_WorkList_WorkList_ID] PRIMARY KEY NONCLUSTERED ([WorkList_ID] ASC)
        ) ON [PRIMARY]

        SELECT F.Funding_ID, Funding_No, MAX(L.CreatedDt), MIN(W.CreatedDt)
        FROM #Funding F
        INNER JOIN #Letter L ON L.Funding_ID = F.Funding_ID
        LEFT OUTER JOIN #WorkList W ON W.Funding_ID = F.Funding_ID
        GROUP BY F.Funding_ID, Funding_No
        HAVING MIN(W.CreatedDt) > MAX(L.CreatedDt)

    How can I write a correct query without using a subquery? Please help. Thanks, Lijo
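
    One subquery-free approach (a rough, untested sketch) is to isolate the latest letter per funding with a self anti-join, and then only join WorkList rows created on or after that letter:

        SELECT F.Funding_ID,
               F.Funding_No,
               L.CreatedDt      AS LatestLetter,
               MIN(W.CreatedDt) AS EarliestWorkListSince
        FROM #Funding F
        INNER JOIN #Letter L
                ON L.Funding_ID = F.Funding_ID
        LEFT JOIN #Letter L2                      -- a later letter for the same funding...
                ON L2.Funding_ID = F.Funding_ID
               AND L2.CreatedDt > L.CreatedDt
        LEFT JOIN #WorkList W
                ON W.Funding_ID = F.Funding_ID
               AND W.CreatedDt >= L.CreatedDt     -- ...and only worklists since that letter
        WHERE L2.Letter_ID IS NULL                -- keeps only the latest letter per funding
        GROUP BY F.Funding_ID, F.Funding_No, L.CreatedDt

    With the sample data this returns (1, 8/8/2009, 9/9/2009) and (2, 3/3/2009, NULL); if two letters can share the same CreatedDt for a funding, a tie-breaker on Letter_ID would be needed.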

    Read the article

  • Accelerometer stops delivering samples when the screen is off on Droid/Nexus One even with a WakeLoc

    - by William
    I have some code that extends a Service and records onSensorChanged(SensorEvent event) accelerometer sensor readings on Android. I would like to be able to record these sensor readings even when the device is off (I'm careful with battery life, and it's made obvious when it's running). While the screen is on, the logging works fine on a 2.0.1 Motorola Droid and a 2.1 Nexus One. However, when the phone goes to sleep (by pushing the power button), the screen turns off and the onSensorChanged events stop being delivered (verified by using a Log.e message every N times onSensorChanged gets called). The service acquires a wake lock to ensure that it keeps running in the background, but it doesn't seem to have any effect. I've tried all the various PowerManager wake locks but none of them seem to matter:

        _WakeLock = _PowerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "My Tag");
        _WakeLock.acquire();

    There have been conflicting reports about whether or not you can actually get data from the sensors while the screen is off... anyone have any experience with this on a more modern version of Android (Eclair) and hardware? This seems to indicate that it was working in Cupcake: http://groups.google.com/group/android-developers/msg/a616773b12c2d9e5 Thanks! PS: The exact same code works as intended in 1.5 on a G1. The logging continues when the screen turns off, when the application is in the background, etc.

    Read the article

  • Audio Streaming Latency

    - by killianmcc
    I'm writing a UDP local area network video chat system and have the video and audio streams working. However, I'm experiencing a little latency (about half a second) in the audio and was wondering which codecs would provide the least latency. I'm using NAudio (http://naudio.codeplex.com/), which gives me access to the following codecs for streaming:

        Speex Narrow Band (VBR)
        Speex Wide Band (16kHz) (VBR)
        Speex Ultra Wide Band (32kHz) (VBR)
        DSP Group TrueSpeech (8.5kbps)
        GSM 6.10 (13kbps)
        Microsoft ADPCM (32.8kbps)
        G.711 a-law (64kbps)
        G.722 16kHz (64kbps)
        G.711 mu-law (64kbps)
        PCM 8kHz 16 bit uncompressed (128kbps)

    I've tried them out and I'm not noticing much difference. Are there any others that I should download and try to reduce latency? I'm only going to be sending voice over the connection, and I'm not really worried about quality or background noise too much.

    UPDATE: I'm sending the audio in blocks like so:

        waveIn = new WaveIn();
        waveIn.BufferMilliseconds = 50;
        waveIn.DeviceNumber = inputDeviceNumber;
        waveIn.WaveFormat = codec.RecordFormat;
        waveIn.DataAvailable += waveIn_DataAvailable;

        void waveIn_DataAvailable(object sender, WaveInEventArgs e)
        {
            if (connected)
            {
                byte[] encoded = codec.Encode(e.Buffer, 0, e.BytesRecorded);
                udpSender.Send(encoded, encoded.Length);
            }
        }

    Read the article

  • Further filter SQL results

    - by eric
    I've got a query that returns a proper result set, using SQL 2005. It is as follows:

        select case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4'
                    then '2009 Q2'
                    else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101)
               end as [Quarter],
               bugtypes.bugtypename,
               count(bug.bugid) as [Total]
        from bug
        left outer join bugtypes
          on bug.crntbugtypeid = bugtypes.bugtypeid
         and bug.projectid = bugtypes.projectid
        where (bug.projectid = 44
               and bug.currentowner in (-1000000031,-1000000045)
               and bug.crntplatformid in (42,37,25,14))
           or (bug.projectid = 44
               and bug.currentowner in (select memberid from groupmembers
                                        where projectid = 44 and groupid in (87,88))
               and bug.crntplatformid in (42,37,25,14))
        group by case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4'
                      then '2009 Q2'
                      else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101)
                 end,
                 bugtypes.bugtypename
        order by 1,3 desc

    It produces a nicely grouped list of years and quarters, an associated descriptor, and a count of incidents in descending count order. What I'd like to do is further filter this so it shows only the 10 most submitted incidents per quarter. What I'm struggling with is how to take this result set and achieve that.
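
    Since this is SQL Server 2005, one straightforward option is ROW_NUMBER() partitioned by quarter. A sketch of the idea, assuming the existing query has been saved as a view with the hypothetical name quarterly_counts and columns [Quarter], bugtypename, Total (the existing SELECT could equally be inlined as a derived table):

        WITH ranked AS (
            SELECT [Quarter], bugtypename, Total,
                   ROW_NUMBER() OVER (PARTITION BY [Quarter]
                                      ORDER BY Total DESC) AS rn
            FROM quarterly_counts
        )
        SELECT [Quarter], bugtypename, Total
        FROM ranked
        WHERE rn <= 10
        ORDER BY [Quarter], Total DESC;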

    Read the article

< Previous Page | 367 368 369 370 371 372 373 374 375 376 377 378  | Next Page >