Search Results

Search found 65333 results on 2614 pages for 'real time goldengate odi'.

Page 76/2614

  • How to represent datetime of different time zones in C#

    - by Mohoch
    Hi. I have a .NET web service (written in C#) that is supposed to serve people around the world. With each request I get the user's datetime in their own time zone, in the format "yyyy/MM/dd HH:mm ZZZZ". I have to convert the string to something that represents the original date and time and specifies the time zone offset from GMT. I have to make some logical calculations and keep it in the database. The regular DateTime does not support this; it does not have a property specifying the time zone. When I try to convert my string into a DateTime, it simply converts it to my local time. I do not want to keep my time in UTC, because I have some logic that has to run per user in their own time. Does anyone know a C# class that handles this? Thanks!
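    One type worth knowing about here is DateTimeOffset, which keeps both the wall-clock value and its UTC offset instead of silently converting to the server's local time. Below is a minimal sketch of parsing such a string, assuming the ZZZZ part is a numeric offset like +03:00; the sample value and exact format string are placeholders and would need to match what the clients actually send:

        using System;
        using System.Globalization;

        class OffsetParsingSketch
        {
            static void Main()
            {
                // Hypothetical input; assumes the "ZZZZ" portion is an offset such as "+03:00".
                string raw = "2010/05/12 14:30 +03:00";

                // DateTimeOffset preserves the original wall-clock time and its offset,
                // unlike DateTime, which has no notion of a time zone.
                DateTimeOffset userTime = DateTimeOffset.ParseExact(
                    raw, "yyyy/MM/dd HH:mm zzz", CultureInfo.InvariantCulture);

                Console.WriteLine(userTime.ToString("o"));  // 2010-05-12T14:30:00.0000000+03:00
                Console.WriteLine(userTime.UtcDateTime);    // the same instant expressed in UTC
                Console.WriteLine(userTime.Offset);         // 03:00:00
            }
        }

    Note that an offset alone does not identify a named time zone, so if per-user business rules need real zone semantics (DST transitions and the like), the client would also have to send a zone id that can be fed to TimeZoneInfo.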

    Read the article

  • Can using Chronic impair your sense of time?

    - by Trip
    Haha... I'm using Chronic to parse the time users add in the Calendar. While the code works and stores the right time, the end result is that if a user adds a time, the record has no date, and because it has no date it will not show up in results. Any ideas?

        def set_dates
          unless self.natural_date.blank? || Chronic.parse(self.natural_date).blank?
            # check if we are dealing with a date or a date + time
            if time_provided?(self.natural_date)
              self.date = nil
              self.time = Chronic.parse(self.natural_date)
            else
              self.date = Chronic.parse(self.natural_date).to_date
              self.time = nil
            end
          end
          unless self.natural_end_date.blank? || Chronic.parse(self.natural_end_date).blank?
            # check if we are dealing with a date or a date + time
            if time_provided?(self.natural_end_date)
              self.end_date = nil
              self.end_time = Chronic.parse(self.natural_end_date)
            else
              self.end_date = Chronic.parse(self.natural_end_date).to_date
              self.end_time = nil
            end
          end
        end

    Edit: here is the time_provided? method:

        def time_provided?(natural_date_string)
          date_span = Chronic.parse(natural_date_string, :guess => false)
          (date_span.last - date_span.first).to_i == 1
        end

    Read the article

  • php time 2 hours wrong for only 50% of users

    - by user1797802
    I am having huge issues with PHP time. For some reason it shows a different time (off by 2 hours) to some users and the correct time to other users. The format is H:i:s d-M-y T. When I view the page in a browser from my PC it tells me it's 11am when in fact it's 9am; when I check via a browser on one of my RDPs I get the correct time. Both PCs are in the same country (UK) and both have the same system time, etc. I tried setting the default timezone, but no matter what I do the server still shows some users the correct time and other users a time 2 hours ahead. Any ideas? The code is: <?php echo gmdate("H:i:s d-M-y T"); ?>

    Read the article

  • Multiple full joins in Postgres are slow

    - by blast83
    I have a program to use the IMDB database and am having very slow performance on my query. It appears that it doesn't use my where condition until after it materializes everything. I looked around for hints to use but nothing seems to work. Here is my query: SELECT * FROM name as n1 FULL JOIN aka_name ON n1.id = aka_name.person_id FULL JOIN cast_info as t2 ON n1.id = t2.person_id FULL JOIN person_info as t3 ON n1.id = t3.person_id FULL JOIN char_name as t4 ON t2.person_role_id = t4.id FULL JOIN role_type as t5 ON t2.role_id = t5.id FULL JOIN title as t6 ON t2.movie_id = t6.id FULL JOIN aka_title as t7 ON t6.id = t7.movie_id FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id FULL JOIN kind_type as t9 ON t6.kind_id = t9.id FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id FULL JOIN movie_info as t11 ON t6.id = t11.movie_id FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id FULL JOIN link_type as t14 ON t13.link_type_id = t14.id FULL JOIN keyword as t15 ON t12.keyword_id = t15.id FULL JOIN company_name as t16 ON t10.company_id = t16.id FULL JOIN company_type as t17 ON t10.company_type_id = t17.id FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id WHERE n1.id = 2003 Very table is related to each other on the join via foreign-key constraints and have indexes for all the mentioned columns. The query plan details: "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)" " Hash Cond: (t8.status_id = t18.id)" " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)" " Hash Cond: (t10.company_type_id = t17.id)" " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)" " Hash Cond: (t10.company_id = t16.id)" " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)" " Hash Cond: (t12.keyword_id = t15.id)" " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)" " Hash Cond: (t13.link_type_id = t14.id)" " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)" " Merge Cond: (t6.id = t13.linked_movie_id)" " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)" " Merge Cond: (t6.id = t12.movie_id)" " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)" " Merge Cond: (t6.id = t19.movie_id)" " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)" " Merge Cond: (t6.id = t11.movie_id)" " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)" " Merge Cond: (t6.id = t10.movie_id)" " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)" " Join Filter: (t6.kind_id = t9.id)" " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)" " Merge Cond: (t6.id = t8.movie_id)" " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) 
(actual time=59121.773..59121.775 rows=1 loops=1)" " Merge Cond: (t6.id = t7.movie_id)" " -> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)" " Sort Key: t6.id" " Sort Method: quicksort Memory: 17kB" " -> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)" " Hash Cond: (t2.movie_id = t6.id)" " -> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)" " Hash Cond: (t2.role_id = t5.id)" " -> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)" " Hash Cond: (t2.person_role_id = t4.id)" " -> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)" " Hash Cond: (n1.id = t3.person_id)" " -> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)" " -> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)" " -> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)" " Index Cond: (id = 2003)" " -> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)" " Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))" " -> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)" " Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))" " -> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)" " -> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)" " Index Cond: (person_id = 2003)" " -> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)" " -> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)" " -> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)" " -> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)" " -> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)" " -> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)" " -> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)" " -> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)" " Sort Key: t7.movie_id" " Sort Method: external merge Disk: 24880kB" " -> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)" " -> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)" " -> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)" " Sort Key: t8.movie_id" " Sort Method: external merge Disk: 3136kB" " -> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)" " -> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)" " -> Seq Scan on 
kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)" " -> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)" " -> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)" " Sort Key: t10.movie_id" " Sort Method: external merge Disk: 90960kB" " -> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)" " -> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)" " -> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)" " Sort Key: t11.movie_id" " Sort Method: external merge Disk: 735512kB" " -> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)" " -> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)" " -> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)" " Sort Key: t19.movie_id" " Sort Method: external merge Disk: 31720kB" " -> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)" " -> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)" " -> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)" " Sort Key: t12.movie_id" " Sort Method: external merge Disk: 78888kB" " -> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)" " -> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)" " -> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)" " Sort Key: t13.linked_movie_id" " Sort Method: external merge Disk: 29632kB" " -> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)" " -> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)" " -> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)" " -> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)" " -> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)" " -> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)" " -> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)" " -> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)" " -> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)" "Total runtime: 147055.016 ms" Is there anyway to force the name.id = 2003 before it tries to join all the tables together? 
As you can see, the end result is only 4 tuples, so it seems like this should be a fast join using the available indexes once the name clause has narrowed things down, however complex the query is.

    Read the article

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will just fill up as much space as you let it. So if I let it loose in my 8 TB zpool it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much rather put everything on my server.

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine - because we all just take it for granted - is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data, these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data, there are some significant challenges in using Java and MapReduce to drive your analysis and create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session. Here is the scenario: we are looking for fraudulent activities in our big data stream, and in this case we are identifying potentially fraudulent activities by looking for specific patterns. We are using geospatial tagging of each transaction so we can create a real-time fraud map for our business users. Where we start to move towards a "wow" moment is in extending this basic use of spatial and pattern matching, as shown in the dashboard screen above, to incorporate spatial analytics within the SQL pattern matching clause. This allows us to compute the distance between transactions. Apologies for the quality of this screenshot... hopefully below you can see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between each transaction. This lets us compare the time of the last transaction with the time of the current transaction and see whether the distance between the two points is possible given the time frame. Obviously, if I buy something in Florida from my favourite bike store (maybe a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona, there is a high probability that the transaction in Arizona is actually fraudulent (I am fast on my Trek, but not that fast!) and we can flag this up in real time on our dashboard. In this post I have used the term "real-time" a couple of times, and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL, because it offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo. You can watch our analytical mash-up "Wow" demo, which compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce, here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html. You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001

    Read the article

  • How to access GMT time using VC

    - by sijith
    In my project I am using GetLocalTime() and GetSystemTime() and setting the current time into the registry. But my problem is that when I change the time on my machine, it is the changed time that gets saved to the registry. Is there any way to access the GMT time so that it is independent of the machine and nobody can change it? Please give some help.

    Read the article

  • Which run-time charting tools produce the best time axes?

    - by eft
    I need to generate and embed a time series chart into an ASP.NET application. Most run-time charting tools generate poor time axes, especially if the time scale is dynamic, i.e. the user may choose to view data over a time scale of days, weeks, months or years. I'm looking for recommended tools that can be integrated into my app. Two that show promise are Chart Director and Google's annotated timeline visualization. Both are quite different in their implementation and have their pros and cons.

    Read the article

  • UTC time stamp on Windows

    - by Arman
    I have a buffer with a UTC time stamp in C, and I broadcast that buffer every ten seconds. The problem is that the time difference between two packets is not consistent: after 5 to 10 iterations the time difference becomes 9 seconds, then 11, and then again 10. Kindly help me to sort out this problem. I am using <time.h> for the UTC time.

    Read the article

  • Display DateTime in GridView using user's time

    - by Rachel Martin
    I have a DateTime stored in UTC that I'd like to display to the user in their local time from within a GridView control. How can I convert my DateTime to the user's time (not my server's local time)? Here is my current field as it appears in the GridView Columns collection: <asp:BoundField DataField="RunTime" HeaderText="Run Time" SortExpression="RunTime" DataFormatString="{0:f}" />
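    Since ASP.NET cannot detect the browser's time zone on the server, one common pattern is to store a Windows time-zone id per user (a profile setting, or a cookie written by client script) and convert each UTC value with TimeZoneInfo before binding, for example from a TemplateField expression or in the grid's RowDataBound handler. A minimal sketch of the conversion, where "Pacific Standard Time" is only a placeholder for whatever id the user has chosen:

        using System;

        class UserTimeSketch
        {
            // Converts a UTC value to the user's zone; the zone id is assumed to
            // come from a per-user setting rather than being hard-coded like this.
            static DateTime ToUserTime(DateTime utcValue, string userTimeZoneId)
            {
                TimeZoneInfo zone = TimeZoneInfo.FindSystemTimeZoneById(userTimeZoneId);
                return TimeZoneInfo.ConvertTimeFromUtc(
                    DateTime.SpecifyKind(utcValue, DateTimeKind.Utc), zone);
            }

            static void Main()
            {
                DateTime runTimeUtc = new DateTime(2010, 5, 12, 18, 30, 0, DateTimeKind.Utc);
                Console.WriteLine(ToUserTime(runTimeUtc, "Pacific Standard Time")); // 11:30 AM local
            }
        }

    In the GridView itself the BoundField would typically become a TemplateField so the bound value can be routed through a helper like this before formatting.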

    Read the article

  • Using C# System.DateTime like C++ time_t

    - by David Boland
    In C++ I am able to get the current time when my application starts: I can use time_t appStartTime = time(NULL); and then, to find the difference in seconds from when it started, I just do the same thing again and take the difference. It looks like I should be using System.DateTime in C# .NET, but the MSDN documentation is confusing in its explanation. How can I use System.DateTime to find the difference in time (in seconds) between when my application started and the current time?
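    For reference, here is a small sketch of the equivalent arithmetic: subtracting two DateTime values yields a TimeSpan whose TotalSeconds plays the role of the time_t difference, and Stopwatch is the usual choice when only elapsed time matters (the Sleep calls below just stand in for real work):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class ElapsedSecondsSketch
        {
            static void Main()
            {
                // Equivalent of: time_t appStartTime = time(NULL);
                DateTime appStartTime = DateTime.UtcNow;

                Thread.Sleep(1500); // stand-in for the application doing work

                // Subtracting two DateTimes gives a TimeSpan.
                double secondsSinceStart = (DateTime.UtcNow - appStartTime).TotalSeconds;
                Console.WriteLine(secondsSinceStart); // roughly 1.5

                // Stopwatch is immune to wall-clock changes, so it is preferable
                // when the goal is purely to measure elapsed time.
                Stopwatch sw = Stopwatch.StartNew();
                Thread.Sleep(500);
                Console.WriteLine(sw.Elapsed.TotalSeconds); // roughly 0.5
            }
        }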

    Read the article

  • Rails time zone selector: intelligently selecting a default

    - by Tim Sullivan
    When signing up for an account on one of my apps, we need to store the time zone the user is in. We're using the time zone selector, which is fine, but I'd like to set the default value to something that is likely the user's current time zone. Is there an easy way, either on the server or using JavaScript, to set the time zone selector to the time zone the user is currently in?

    Read the article

  • converting timestamp to nanoseconds

    - by kuki
    I have a certain date and time value, say 28-3-2012 (date) and 10:36:45 (time). I wish to convert this whole timestamp to nanoseconds, with nanosecond precision. That is, the user would input the time and date as shown, but internally I have to be accurate up to nanoseconds and convert the whole thing to nanoseconds to form a unique key assigned to a specific object created at that particular time. Could someone please help me with this?
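    The question does not name a platform, so purely as an illustration, here is a hypothetical sketch in C# .NET: a DateTime is stored internally as 100-nanosecond ticks, so a nanosecond count is just the tick count multiplied by 100, measured from some epoch so the result fits in a 64-bit integer. Note that a date/time entered by a user is only accurate to the second, so the trailing digits are zeros and the value alone will not be unique per object; a counter or GUID usually has to be mixed in.

        using System;
        using System.Globalization;

        class NanosecondKeySketch
        {
            static void Main()
            {
                // User-style input as in the question: 28-3-2012 10:36:45
                string input = "28-3-2012 10:36:45";
                DateTime t = DateTime.ParseExact(
                    input, "d-M-yyyy HH:mm:ss", CultureInfo.InvariantCulture);

                // Ticks are 100 ns units. Counting from the Unix epoch keeps the
                // nanosecond total inside a signed 64-bit range (good until ~2262);
                // counting from year 0001 would overflow a long once multiplied by 100.
                DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
                long nanosecondsSinceEpoch = (t - epoch).Ticks * 100;

                // 1332931005000000000 - the last nine digits are zero because the
                // parsed value is only accurate to the second.
                Console.WriteLine(nanosecondsSinceEpoch);
            }
        }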

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is that we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data, and that has already proven to be much better than what we had. When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (to give myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating constantly running 'batch' processes that go through all listings regardless of their updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

    Read the article

  • Work time in fullcalendar [Solution]

    - by Zozo
    Full calendar have no included options to work-time feature (selecting first and last rows in agenda view for any day - where in example company is not working). I managed something like that: viewDisplay: function(view){ $.ajax({ url: 'index.php?r=calendar/Default/worktime', dataType: 'json', success: function(data){ if(view.name=='agendaWeek') selectWorkTime(data, 30, 0, 24, false); else if(view.name=='agendaDay') selectDayWorkTime(data, 30, 0, 24, view, false); } }); } Where index.php?r=calendar/Default/worktime is php file returning json. It looks like that: $arr = array( 'mon' => array('8:00', '17:00'), 'tue' => array('9:00', '15:00'), 'wed' => array('9:30', '19:00'), 'thu' => array('6:00', '14:00'), 'fri' => array('0:00', '24:00'), 'sat' => array('9:00', '14:00'), 'sun' => array() ); foreach ($arr as &$day){ foreach($day as &$hour){ $tmp = explode(':', $hour); $hour = $tmp[0] * 3600 + $tmp[1] * 60; } } print json_encode($arr); and at the end, some functions using for counting and selecting work-time: function selectDayWorkTime(timeArray, slotMinutes, minTime, maxTime, viewObject, showAtHolidays){ var dayname; $('.fc-content').find('.fc-view-agendaWeek').find('.fc-agenda-body') .children('.fc-work-time').remove(); $('.fc-content').find('.fc-view-agendaDay') .find('.fc-work-time-day').removeClass('fc-work-time-day'); switch(viewObject.start.getDay()){ case 1: dayname='mon'; break; case 2: dayname='tue'; break; case 3: dayname='wed'; break; case 4: dayname='thu'; break; case 5: dayname='fri'; break; case 6: dayname='sat'; break; case 0: dayname='sun'; break; } for(var day in timeArray){ if(day == dayname){ if($('.fc-content').find('.fc-view-agendaDay').find('.fc-'+day).attr('class').search('fc-holiday') == -1 || showAtHolidays){ var startBefore = 0; var endBefore = timeArray[day][0] / (60 * slotMinutes) - (minTime * 60) / slotMinutes; var startAfter = timeArray[day][1] / (60 * slotMinutes) - (minTime * 60) / slotMinutes; var endAfter = (maxTime - minTime) * 60 / slotMinutes - 1; for(startBefore; startBefore < endBefore; startBefore++){ $('.fc-view-agendaDay').find('.fc-slot'+startBefore).find('div').addClass('fc-work-time-day'); } for(startAfter; startAfter <= endAfter; startAfter++){ $('.fc-view-agendaDay').find('.fc-slot'+startAfter).find('div').addClass('fc-work-time-day'); } } } } } function selectWorkTime(timeArray, slotMinutes, minTime, maxTime, showAtHolidays){ for(var day in timeArray){ var startBefore = 0; var endBefore = timeArray[day][0] / (60 * slotMinutes) - (minTime * 60) / slotMinutes; var startAfter = timeArray[day][1] / (60 * slotMinutes) - (minTime * 60) / slotMinutes; var endAfter = (maxTime - minTime) * 60 / slotMinutes - 1; if(startBefore > endBefore) endBefore = startBefore; if(startAfter > endAfter) startAfter = endAfter; try{ selectCell(startBefore, endBefore, 'fc-'+day, 'fc-work-time', false, showAtHolidays); selectCell(startAfter, endAfter, 'fc-'+day, 'fc-work-time', true, showAtHolidays); } catch(e){ continue; } } } function selectCell(startRowNo, endRowNo, collClass, cellClass, closeGap, showAtHolidays){ $('.fc-content').find('.fc-view-agendaWeek').find('.fc-agenda-body') .children('.'+cellClass+''+startRowNo+''+collClass).remove(); $('.fc-content').find('.fc-view-agendaDay') .find('.fc-work-time-day').removeClass('fc-work-time-day'); if($('.fc-content').find('.fc-view-agendaWeek').find('.'+collClass).attr('class').search('fc-holiday') == -1 || showAtHolidays){ var width = $('.fc-content').find('.fc-view-agendaWeek') .find('.'+collClass+':last').width(); 
var height = 0; if(closeGap && (startRowNo != endRowNo)){ height = $('.fc-content').find('.fc-view-agendaWeek') .find('.fc-slot'+ startRowNo).height(); } $('.fc-view-agendaWeek').find('.fc-agenda-body').prepend('<div class="'+cellClass+' ' + ''+cellClass+''+startRowNo+''+collClass+'"></div>'); $('.'+cellClass).width(width - 2); height += $('.fc-content').find('.fc-view-agendaWeek') .find('.fc-slot'+ endRowNo).position().top - $('.fc-content').find('.fc-view-agendaWeek') .find('.fc-slot'+ startRowNo).position().top; $('.'+cellClass+''+startRowNo+''+collClass).height(height); $('.'+cellClass+''+startRowNo+''+collClass) .css('margin-top', $('.fc-content').find('.fc-view-agendaWeek') .find('.fc-slot'+ startRowNo).position().top); $('.'+cellClass+''+startRowNo+''+collClass) .css('margin-left', $('.fc-content').find('.fc-view-agendaWeek') .find('.'+collClass+':last').offset().left - width / 2); } } Don't forget about CSS: .fc-work-time-day{ background-color: yellow; opacity: 0.3; filter: alpha(opacity=30); /* for IE */ } .fc-work-time{ position: absolute; background-color: yellow; z-index:10; margin: 0; padding: 0; text-align: left; z-index: 0; opacity: 0.3; filter: alpha(opacity=30); /* for IE */ } So, I've got some questions about - is the other way to make the same, but no using absolute div's in agendaWeek? And... How can I get in viewDisplay function actual slotMinutes, minTime and maxTime

    Read the article

  • Motorola NVG510 bridge mode

    - by Blacklight Shining
    I have a Motorola NVG510 modem from AT&T, and I would like to disable all routing functions and use it as just a modem. I have a Time Capsule that's already connected and broadcasting its own wireless network, which is how I've been connecting (it's been reporting a double-NAT error, which I assume is from the NVG510 also acting as a router). I followed the instructions in question six here, and I can connect as I did before, but my Time Capsule still has a double-NAT error. How do I put the NVG510 into bridge mode or otherwise fix the double-NAT error? (No, ignoring it does not count as a fix.)

    Read the article

  • ./rtnet start rteth0-mac: unknown interface: No such device ioctl: No such device

    - by Anisha Kaul
    I have installed the RTnet over Xenomai. RTnet compiled well, and I also tested loopback on the single machine and was able to ping. However I noticed ./rtnet start showing the following output: What should I interpret when all it says is "no such device"? What more info should I provide here for you to help me in getting rid of this error? linux-y3pi:/usr/local/rtnet/sbin # ./rtnet start rteth0: unknown interface: No such device rteth0-mac: unknown interface: No such device ioctl: No such device ioctl: No such device ioctl: No such device ioctl: No such device ioctl (add): No such device vnic0: unknown interface: No such device SIOCSIFADDR: No such device vnic0: unknown interface: No such device SIOCSIFNETMASK: No such device Waiting for all slaves...ioctl: No such device ioctl: No such device linux-y3pi:/usr/local/rtnet/sbin # lsmod: linux-y3pi:/usr/local/rtnet/sbin # lsmod Module Size Used by tdma 18281 0 rtmac 9274 1 tdma rtcfg 49485 0 rtcap 7216 0 rt_loopback 1563 2 rtpacket 5517 0 rtudp 10655 0 rt_8139too 11374 0 rtipv4 22842 2 rtcfg,rtudp rtnet 42130 9 tdma,rtmac,rtcfg,rtcap,rt_loopback,rtpacket,rtudp,rt_8139too,rtipv4 ip6t_LOG 8480 6 xt_tcpudp 3540 2 xt_pkttype 1176 3 ipt_LOG 8201 6 xt_limit 2159 12 snd_pcm_oss 44878 0 snd_mixer_oss 15151 1 snd_pcm_oss snd_seq 55731 0 s nd_seq_device 6698 1 snd_seq edd 8407 0 ip6t_REJECT 4306 3 nf_conntrack_ipv6 8186 4 nf_defrag_ipv6 10128 1 nf_conntrack_ipv6 ip6table_raw 1451 1 xt_NOTRACK 1112 4 ipt_REJECT 2397 3 xt_state 1314 8 iptable_raw 1478 1 iptable_filter 1706 1 ip6table_mangle 1756 0 nf_conntrack_netbios_ns 1678 0 nf_conntrack_ipv4 8957 4 nf_conntrack 80411 5 nf_conntrack_ipv6,xt_NOTRACK,xt_state,nf_conntrack_netbios_ns,nf_conntrack_ipv4 nf_defrag_ipv4 1561 1 nf_conntrack_ipv4 ip_tables 18872 2 iptable_raw,iptable_filter ip6table_filter 1679 1 ip6_tables 19066 4 ip6t_LOG,ip6table_raw,ip6table_mangle,ip6table_filter x_tables 24094 16 ip6t_LOG,xt_tcpudp,xt_pkttype,ipt_LOG,xt_limit,ip6t_REJECT,ip6table_raw,xt_NOTRACK,ipt_REJECT,xt_state,iptable_raw,iptable_filter,ip6table_mangle,ip_tables,ip6table_filter,ip6_tables fuse 69279 3 loop 17417 0 dm_mod 71671 0 snd_hda_codec_via 57768 1 snd_hda_intel 24871 2 snd_hda_codec 95006 2 snd_hda_codec_via,snd_hda_intel snd_hwdep 6540 1 snd_hda_codec snd_pcm 90716 3 snd_pcm_oss,snd_hda_intel,snd_hda_codec snd_timer 22050 2 snd_seq,snd_pcm snd 71410 14 snd_pcm_oss,snd_mixer_oss,snd_seq,snd_seq_device,snd_hda_codec_via,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_timer soundcore 7854 1 snd iTCO_wdt 11716 0 iTCO_vendor_support 2942 1 iTCO_wdt snd_page_alloc 8324 2 snd_hda_intel,snd_pcm sr_mod 13186 0 cdrom 37628 1 sr_mod i2c_i801 9677 0 pcspkr 1950 0 sg 28847 0 serio_raw 4534 0 ext4 361361 2 jbd2 82943 1 ext4 crc16 1699 1 ext4 i915 500199 2 drm_kms_helper 33537 1 i915 drm 211193 3 i915,drm_kms_helper sd_mod 33977 5 i2c_algo_bit 5625 1 i915 intel_agp 11529 1 i915 intel_gtt 16397 3 i915,intel_agp ata_generic 3787 0 ata_piix 22875 4 ahci 20097 0 libahci 22089 1 ahci libata 194812 4 ata_generic,ata_piix,ahci,libahci scsi_mod 204709 4 sr_mod,sg,sd_mod,libata linux-y3pi:/usr/local/rtnet/sbin #

    Read the article

  • Adeos's role w.r.t. Linux

    - by Anisha Kaul
    The event pipeline: the fundamental Adeos structure one must keep in mind is the chain of client domains asking for event control. A domain is a kernel-based software component which can ask the Adeos layer to be notified of:
    · every incoming external interrupt, or auto-generated virtual interrupt;
    · every system call issued by Linux applications;
    · other system events triggered by the kernel code (e.g. Linux task switching, signal notification, Linux task exits, etc.).
    From: Life with Adeos: http://www.xenomai.org/documentation/xenomai-2.4/pdf/Life-with-Adeos-rev-B.pdf
    Question: Adeos is supposed to sit between the hardware and the Linux kernel. I can understand Adeos telling Linux about hardware interrupts, but why should Adeos know about the system calls issued by Linux?

    Read the article

  • Project management, timesheet and planning software

    - by hfidgen
    Hiya, I'm trying to find an integrated PM solution which will give my business all of the following:
    - Timesheeting, so we can track time spent on tasks
    - Holiday planner (integrated with timesheets and project management)
    - Project management tool integrating the above, with milestones, Gantt charts, dependencies etc.
    - Forecasting ability (nice to have, but not a requirement)
    - Reporting capability - especially time spent on projects, costs etc.
    Now yeah, that's quite a lot of functionality, I appreciate that! But currently we've got 3 systems, none of which really talk to each other, and it's a right headache. So far we've looked at: OpenWorkbench - not enough features; Basecamp - not enough features and too reliant on being online; MS Project - too expensive? Can anyone throw some other hats into the ring which maybe I've not heard about? Really interested to hear how other people have approached this, it's not an unusual business requirement! Thanks!

    Read the article

  • Google Maps mashup for notes/househunting

    - by afray
    I'm house-hunting at the moment and I'm trying to geek it out er I mean streamline the decision-making process. I'm currently using Google Maps's "my maps" feature to store pins to properties. I create one map per estate agent, then put the relevant info into the individual pins. The idea being, I can look at the map and quickly choose which property to view next. However, the pins don't currently link back to the map they're owned by, so you have to hunt a bit to get the estate agent info; it's a hassle to get all the maps displayed in a new session if you have lots of agents; and each pin doesn't automatically show its bubble, so you have to do lots of clicking to see all the info you want. I've tried Evernote, but despite its tag system initially showing some promise, I can't find a way to seamlessly integrate maps. A few Google searches don't turn anything up either. Even the big sites, like http://www.rightmove.co.uk, don't seem to provide any maps integration by default. You can see an individual house's location, but not all the results of a search. So is there a website or Windows program I could use to do something like this? Viewing all properties on a map is a must, as is quick access to contact details.

    Read the article

  • Good C++ books regarding Performance?

    - by Leon
    Besides the books everyone knows about, like Meyers's three Effective C++/STL books, are there any other really good C++ books specifically aimed towards performance code? Maybe this is for gaming, telecommunications, finance/high-frequency trading, etc.? When I say performance I mean things that a normal C++ book wouldn't bother advising on, because the gain in performance isn't worthwhile for 95% of C++ developers. Maybe suggestions like avoiding virtual pointers, going into great depth about inlining, etc.? A book going into great depth on C++ memory allocation or multithreading performance would obviously be very useful.

    Read the article

  • Log "date -s" command

    - by LinuxPenseur
    Hi, I know that the date -s <STRING> command sets the time described by the string STRING. What I want is to log that command, whenever it is used to set the time, into the file /tmp/log/user.log. In my Linux distribution the logging is done by syslog-ng, and I already have some logs going into /tmp/log/user.log. This is the content of /etc/syslog-ng/syslog-ng.conf in my system for logging into /tmp/log/user.log:
        destination d_notice { file("/tmp/log/user.log"); };
        filter f_filter10 { level(notice) and not facility(mail,authpriv,cron); };
        log { source(s_sys); filter(f_filter10); destination(d_notice); };
    What should I do so that the date -s command is also logged into /tmp/log/user.log?

    Read the article

  • linux/shell: change a file's modify timestamp relatively?

    - by index
    My Canon camera produces files like IMG_1234.JPG and MVI_1234.AVI. It also timestamps those files. Unfortunately, during a trip to another timezone several cameras were used, one of which did not have the correct time zone set - a metadata mess. Now I would like to correct this. Proposed algorithm:
    1. read the file's modify date
    2. add a delta, i.e. hhmmss (preferred: change the timezone)
    3. write the new timestamp
    Unless someone knows a tool or a combination of tools that does the trick directly, maybe one could simplify the calculation using epoch time (seconds since ..) and whip up a shell script. Any help appreciated!

    Read the article
