Search Results

Search found 3815 results on 153 pages for 'compact policy'.


  • MySQL managing catalogue views

    - by Mark Lawrence
    A friend of mine has a catalogue that currently holds about 500 rows, or 500 items. We are looking at ways to provide reports on the catalogue, including the number of times an item was viewed and the dates on which it was viewed. His site is averaging around 25,000 page impressions per month, and if we assume for a minute that half of these were catalogue items, then roughly 12,000 catalogue items are viewed each month. My question is about the best way to manage item views in the database. The first option is to insert the catalogue ID into a table and then increment the number of times it is viewed. The advantage of this is its compact nature: there will only ever be as many rows in the table as there are catalogue items. `catalogue_id`, `views` The disadvantage is that no date information is being held, short of maintaining the last time an item was viewed. The second option is to insert a new row each time an item is viewed. `catalogue_id`, `timestamp` If we continue with the assumed figure of 12,000 item views, that means adding 12,000 rows to the table each month, or 144,000 rows each year. The advantage of this is that we know the number of times the item was viewed, and also the dates on which it was viewed. The disadvantage is the size of the table. Is a table with 144,000 rows becoming too large for MySQL? Interested to hear any thoughts or suggestions on how to achieve this. Thanks.
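
    A minimal sketch of the two designs under discussion, with illustrative table and column names (only `catalogue_id`, `views`, and the timestamp come from the question):

        -- Option 1: one row per catalogue item, counter only
        CREATE TABLE catalogue_views (
            catalogue_id INT UNSIGNED NOT NULL PRIMARY KEY,
            views        INT UNSIGNED NOT NULL DEFAULT 0
        );
        -- increment on every view
        INSERT INTO catalogue_views (catalogue_id, views) VALUES (123, 1)
            ON DUPLICATE KEY UPDATE views = views + 1;

        -- Option 2: one row per view, which preserves the date information
        CREATE TABLE catalogue_view_log (
            id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            catalogue_id INT UNSIGNED NOT NULL,
            viewed_at    TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_catalogue (catalogue_id, viewed_at)
        );
        INSERT INTO catalogue_view_log (catalogue_id) VALUES (123);

    For scale, 144,000 rows a year is small by MySQL standards; with an index on `catalogue_id` as sketched above, per-item counts and date-range queries stay cheap.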

    Read the article

  • Laravel4: Checking many-to-many relationship attribute even when it does not exist

    - by Simo A.
    This is my first time with Laravel and also Stack Overflow... I am developing a simple course management system. The main objects are User and Course, with a many-to-many relationship between them. The pivot table has an additional attribute, participant_role, which defines whether the user is a student or an instructor within that particular course. class Course extends Eloquent { public function users() { return $this->belongsToMany('User')->withPivot('participant_role')->withTimestamps(); } } However, there is an additional role, system admin, which can be defined on the User object. And this brings me mucho headache. In my blade view I have the following: @if ($course->pivot->participant_role == 'instructor') { // do something here... } This works fine when the user has been assigned to that particular $course and there is an entry for this user-course combination in the pivot table. However, the problem is the system administrator, who also should be able to access course details. The if-clause above gives a "Trying to get property of non-object" error, apparently because there is no entry in the pivot table for the system administrator (as the role is defined within the User object). I could probably solve the problem by using some off-the-shelf bundle for handling role-based permissions. Or I could (but don't want to) do something like this with two nested if-clauses: if (!empty($course->pivot)) { if ($course->pivot->participant_role == 'instructor') { // do something... } } Other options (suggested in partially similar entries on Stack Overflow) would include 1) reading all the pivot table entries into an array and using in_array to check if a role exists, or even 2) an SQL left join to do this at the database level. However, ultimately I am looking for a compact one-line solution. Can I do anything similar to this, which unfortunately does not seem to work? if (! empty ($course->pivot->participant_role == 'instructor') ) { // do something... } The shorter the answer, the better :-). Thanks a lot!
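
    One compact possibility, sketched against the Blade snippet in the question (the 'instructor' value and the pivot accessor are the asker's own; the only addition is the isset() guard, which short-circuits when no pivot row exists):

        {{-- safe even for users with no entry in the pivot table --}}
        @if (isset($course->pivot) && $course->pivot->participant_role == 'instructor')
            {{-- instructor-only markup here --}}
        @endif

    A tidier long-term route would be to push the check into a helper on the model (a hypothetical isInstructorOf($course) method, say), but the guard above keeps the view to a single condition.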

    Read the article

  • How can I simplify this user interface?

    - by Bears will eat you
    I'm writing an internal-tools webapp; one of the central pages in this tool has a whole bunch of related commands the user can execute by clicking one of a number of buttons on the page (pictured in the original post). Ideally, all of the buttons would fit on one line. Ordinarily I'd do this by changing each widget from a button with a (sometimes long) text label to a simple, compact icon - e.g. one of them could be replaced by a familiar disk icon. Unfortunately, I don't think I can do this for every button on this particular page. Some of the command buttons just don't have good visual analogs - "VDS List", for example. Or, if I needed to add another button in the future for some other kind of list, I'd need two icons that both communicate "list-ness" and which list. So, I'm still considering this option, but I don't love it. Now it's come time for me to add yet another button to this section (don't you love internal tools?). There's not enough room on that single line to fit the new button. Aside from the icon solution I already mentioned, what would be a good* way to simplify/declutter/reduce or otherwise improve this UI? *As per Jakob Nielsen's article, I'd like to think that a dropdown menu is not the solution.

    Read the article

  • Find and Replace using Perl for a dynamic url based on wordpress post

    - by user1068544
    How do you find the following div using Perl? The URL and image location will consistently change based on the post URL, so I need to use a wildcard. I must use a regular expression because I am limited in what I can use due to the software I am using. http://community.autoblogged.com/entries/344640-common-search-and-replace-patterns <div class="tweetmeme_button" style="float: right; margin-left: 10px;"> <a href="http://api.tweetmeme.com/share?url=http%3A%2F%2Fjumpinblack.com%2F2011%2F11%2F25%2Fdrake-and-rick-ross-you-only-live-once-ep-mixtape-2011-download%2F"><br /> <img src="http://api.tweetmeme.com/imagebutton.gif?url=http%3A%2F%2Fjumpinblack.com%2F2011%2F11%2F25%2Fdrake-and-rick-ross-you-only-live-once-ep-mixtape-2011-download%2F&amp;source=jumpinblack1&amp;style=compact&amp;b=2" height="61" width="50" /><br /> </a> </div> I tried using <div class="tweetmeme_button" style="float: right; margin-left: 10px;">.*<\/div>
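
    The usual culprits with a pattern like that are greediness and the fact that . does not cross newlines by default. A sketch of one fix, assuming the tweetmeme div never contains a nested <div> (true for the markup shown):

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $html = do { local $/; <STDIN> };   # slurp the whole post

        # [^>]* tolerates attribute changes, .*? stops at the first </div>,
        # and /s lets . span the newlines inside the block.
        $html =~ s{<div class="tweetmeme_button"[^>]*>.*?</div>}{}sg;

        print $html;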

    Read the article

  • what's a good technique for building and running many similar unit tests?

    - by jcollum
    I have a test setup where I have many very similar unit tests that I need to run. For example, there are about 40 stored procedures that need to be checked for existence in the target environment. However, I'd like all the tests to be grouped by their business unit, so there'd be 40 instances of a very similar TestMethod in 40 separate classes. Kinda lame. One other thing: each group of tests needs to be in its own solution. So Business Unit A will have a solution called Tests.BusinessUnitA. I'm thinking that I can set this all up by passing a configuration object (with the name of the stored proc to check, among other things) to a TestRunner class. The problem is that I'm losing the atomicity of my unit tests. I wouldn't be able to run just one of the tests; I'd have to run all the tests in the TestRunner class. This is what the code looks like at this time. Sure, it's nice and compact, but if Test 8 fails, I have no way of running just Test 8.

        TestRunner runner = new TestRunner(config, this.TestContext);
        var runnerType = typeof(TestRunner);
        var methods = runnerType.GetMethods()
            .Where(x => x.GetCustomAttributes(typeof(TestMethodAttribute), false).Count() > 0)
            .ToArray();
        foreach (var method in methods)
        {
            method.Invoke(runner, null);
        }

    So I'm looking for suggestions for making a group of unit tests that take in a configuration object but won't require me to generate many, many TestMethods. This looks like it might require code generation, but I'd like to solve it without that.
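
    One way to keep per-procedure granularity without 40 hand-written methods is a data-driven test. This is only a sketch, and it assumes NUnit, whose [TestCaseSource] attribute turns each item yielded by a source method into its own individually runnable test; the procedure names and the ProcChecker helper are placeholders:

        using System;
        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class BusinessUnitAProcTests
        {
            // One entry per stored procedure; each shows up as its own test result.
            private static IEnumerable<string> ProcNames()
            {
                yield return "usp_GetCustomer";   // placeholder names
                yield return "usp_SaveOrder";
                // ...or load the ~40 names from the configuration object
            }

            [TestCaseSource("ProcNames")]
            public void StoredProcedureExists(string procName)
            {
                Assert.IsTrue(ProcChecker.Exists(procName),
                    "Stored procedure '" + procName + "' was not found in the target environment.");
            }
        }

        public static class ProcChecker
        {
            public static bool Exists(string procName)
            {
                // placeholder: query sys.procedures or INFORMATION_SCHEMA.ROUTINES here
                throw new NotImplementedException();
            }
        }

    MSTest's [DataSource] attribute, driven by a CSV or XML file of procedure names kept alongside each business unit's solution, is the closest built-in equivalent if adding NUnit is not an option.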

    Read the article

  • Initializing an object to all zeroes

    - by dash-tom-bang
    Oftentimes a data structure's valid initialization is to set all members to zero. Even when programming in C++, one may need to interface with an external API for which this is the case. Is there any practical difference between: some_struct s; memset(&s, 0, sizeof(s)); and simply some_struct s = { 0 }; Do folks find themselves using both, with a method for choosing which is more appropriate for a given application? For myself, as mostly a C++ programmer who doesn't use memset much, I'm never certain of the function signature, so I find the second example is just easier to use, in addition to being less typing, more compact, and maybe even more obvious, since it says "this object is initialized to zero" right in the declaration rather than waiting for the next line of code and seeing, "oh, this object is zero-initialized." When creating classes and structs in C++ I tend to use initialization lists; I'm curious about folks' thoughts on the two "C style" initializations above rather than a comparison against what is available in C++, since I suspect many of us interface with C libraries even if we code mostly in C++ ourselves.
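
    For reference, a minimal compilable comparison of the two forms (the struct itself is illustrative):

        #include <cstring>

        struct some_struct {
            int   id;
            float weight;
            char  name[32];
        };

        int main() {
            // Form 1: default-construct, then explicitly zero the bytes.
            some_struct a;
            std::memset(&a, 0, sizeof(a));

            // Form 2: aggregate initialization; every member is zero-initialized.
            some_struct b = { 0 };

            return a.id + b.id;   // both zero here
        }

    For a plain C-style struct like this, both forms leave every member equal to zero; the aggregate form additionally works for const objects and avoids the classic argument-order and sizeof slips that memset invites.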

    Read the article

  • Selecting records in SQL that have the minimum value for that record based on another field

    - by Ryan
    I have a set of data, and while the number of fields and tables it joins with is quite complex, I believe I can distill my problem down using the required fields/tables here for illustration regarding this particular problem. I have three tables: ClientData, Sources, Prices. Here is what my current query looks like before selecting the minimum value:

        select c.RecordID, c.Description, s.Source, p.Price, p.Type, p.Weight
        from ClientData c
        inner join Sources s ON c.RecordID = s.RecordID
        inner join Prices p ON s.SourceID = p.SourceID

    This produces the following result:

        RecordID   Description        Source    Price  Type    Weight
        ==============================================================
        001002003  ABC Common Stock   Vendor 1  104.5  Close   1
        001002003  ABC Common Stock   Vendor 1  103    Bid     2
        001002003  ABC Common Stock   Vendor 2  106    Close   1
        001002003  ABC Common Stock   Vendor 2  100    Unknwn  0
        111222333  DEF Preferred Stk  Vendor 3  80     Bid     2
        111222333  DEF Preferred Stk  Vendor 3  82     Mid     3
        111222333  DEF Preferred Stk  Vendor 2  81     Ask     4

    What I am trying to do is display prices that belong to the same record which have the minimum non-zero weight for that record (so the weight must be greater than 0, but it has to be the minimum from amongst the remaining weights). So in the above example, for record 001002003 I would want to show the close prices from Vendor 1 and Vendor 2 because they both have a weight of 1 (the minimum weight for that record). But for 111222333 I would want to show just the bid price from Vendor 3 because its weight of 2 is the minimum non-zero weight for that record. The result that I'm after would look like:

        RecordID   Description        Source    Price  Type   Weight
        ==============================================================
        001002003  ABC Common Stock   Vendor 1  104.5  Close  1
        001002003  ABC Common Stock   Vendor 2  106    Close  1
        111222333  DEF Preferred Stk  Vendor 3  80     Bid    2

    Any ideas on how to achieve this? EDIT: This is for SQL Compact Edition.
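
    A sketch of one approach that avoids ranking functions (relevant since this is SQL Compact Edition) and assumes your CE version accepts derived tables: join back to a grouped subquery that finds each record's minimum non-zero weight. Table and column names come from the question.

        select c.RecordID, c.Description, s.Source, p.Price, p.Type, p.Weight
        from ClientData c
        inner join Sources s on c.RecordID = s.RecordID
        inner join Prices p on s.SourceID = p.SourceID
        inner join (
            -- minimum non-zero weight per record
            select c2.RecordID, min(p2.Weight) as MinWeight
            from ClientData c2
            inner join Sources s2 on c2.RecordID = s2.RecordID
            inner join Prices p2 on s2.SourceID = p2.SourceID
            where p2.Weight > 0
            group by c2.RecordID
        ) mw on mw.RecordID = c.RecordID
            and p.Weight = mw.MinWeight

    For the sample data this keeps both weight-1 Close rows for 001002003 and only the weight-2 Bid row for 111222333, and the weight-0 row never qualifies.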

    Read the article

  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a followup question from my previous question.) In Windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and the hotkey behaviour. In certain conditions it could be set up so that getting to a particular folder was as easy as alt-a x (and without a file explorer window open it was as fast as win-e alt-a x). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation to see if I am missing any methods. My current way to navigate quickly is basically: move hand to mouse, move cursor to the navigation pane/pain, scroll all the way to the top (because normally the pane is focused on whatever deep directory structure I am already in), sift through my 50+ favorites to get the one I want - or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality - and select it. This is slower than my previous method by upwards of an order of magnitude.

    There are a couple of things I've contemplated: adding expandable folders, not just direct links, to the Favorites menu; adding expandable folders, not just direct links, to the Start menu; adding links to my favorite folders to a submenu of the Start menu so that they come up when I search for them (they do, but this is still rather cumbersome); and using 7stacks (I cannot link the URL directly due to lack of reputation, but http://www.alastria.com/index.php?p=software-7s). That last one is about the closest I've gotten to some sort of compact, customizable, easy to access, tree-based navigation structure. How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps, addons or extensions that can achieve this sort of functionality?

    The current solution (thanks to the answers below) I am going to use is a combination of AutoHotkey and 7stacks - AutoHotkey to launch 7stacks, and 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue; the only remaining issues (note that these are really minor, I am really splitting hairs more than anything here) are that I can't use it for existing folder navigation (i.e. when I already have an Explorer window open and want to go to another directory), that it is a bit more cumbersome to add/remove entries compared to XP favorites, and that it is a little slower than XP favorites. Whatever. I'm happy. Thanks guys. I think the answer is a split between John T and Kelbizzle - I've elected to give the answer to John T and +1 to Kelbizzle, as I had already mentioned 7stacks.
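
    For anyone wanting the AutoHotkey half of that setup, a minimal sketch (the key combinations, paths, and the choice of targets are illustrative additions, not part of the original answer):

        ; AutoHotkey v1 script
        ; Win+Shift+D jumps straight to a frequently used folder
        #+d::Run, explorer.exe "C:\Users\me\Projects"

        ; Win+Shift+S launches 7stacks (or any other launcher) the same way
        #+s::Run, "C:\Program Files\7stacks\7stacks.exe"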

    Read the article

  • Clonezilla restore from Samba - no 'restoredisk' option

    - by MT_Head
    I used a CloneZilla LiveCD to back up a couple of Windows machines to a Samba share. Now I'm trying to restore those images, and CloneZilla won't even give me the 'restoredisk' or 'restorepart' options on the menu. I'm guessing that this is because CZ isn't recognizing a valid image... but why? Here's a listing of the folder on the Samba share:

        -rwxrwxrwx 1 marc users        319 May 31 03:45 blkdev.list
        -rwxrwxrwx 1 marc users       5307 May 31 04:41 clonezilla-img
        -rwxrwxrwx 1 marc users          4 May 31 04:31 disk
        -rwxrwxrwx 1 marc users      16091 May 31 04:31 Info-dmi.txt
        -rwxrwxrwx 1 marc users      11029 May 31 04:31 Info-lshw.txt
        -rwxrwxrwx 1 marc users       1502 May 31 04:31 Info-lspci.txt
        -rwxrwxrwx 1 marc users        170 May 31 04:31 Info-packages.txt
        -rwxrwxrwx 1 marc users         80 May 31 04:41 Info-saved-by-cmd.txt
        -rwxrwxrwx 1 marc users         10 May 31 04:31 parts
        -rwxrwxrwx 1 marc users 2097152000 May 31 04:06 sda1.ntfs-ptcl-img.gz.aa
        -rwxrwxrwx 1 marc users  247361656 May 31 04:08 sda1.ntfs-ptcl-img.gz.ab
        -rwxrwxrwx 1 marc users  823182034 May 31 04:31 sda2.ntfs-ptcl-img.gz.aa
        -rwxrwxrwx 1 marc users         36 May 31 03:45 sda-chs.sf
        -rwxrwxrwx 1 marc users      31744 May 31 03:45 sda-hidden-data-after-mbr
        -rwxrwxrwx 1 marc users        512 May 31 03:45 sda-mbr
        -rwxrwxrwx 1 marc users        315 May 31 03:45 sda-pt.parted
        -rwxrwxrwx 1 marc users        285 May 31 03:45 sda-pt.parted.compact
        -rwxrwxrwx 1 marc users        259 May 31 03:45 sda-pt.sf

    (I've been experimenting with various permissions trying to get this to work; that's why they're currently all "rwxrwxrwx"...) I've got my CZ LiveCD stuck in a (different) machine with a 160GB SATA disk that I'm fine with overwriting; although CZ doesn't show a directory listing, it does show that the correct folder is mounted as /home/partimag. But a moment later, after selecting either Beginner or Expert, I'm only presented with the "savedisk", "saveparts", and "exit" options. What am I doing wrong? I am confident that the initial backup was successful; I can post the log if desired, or any other information that might be germane. Edit: I've copied the contents of the folder onto a 16GB USB stick and set THAT as /home/partimag. Still nothing. What the hell is CZ looking for?

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking on EXT4 performance on Compact Flash media. I have created an ext4 fs with block size of 65536. however I can not mount it on ubuntu-10.10-netbook-i386. (it is already mounting ext4 fs with 4096 bytes of block sizes) According to my readings on ext4 it should allow such big block sized fs. I want to hear your comments. root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3 Warning: blocksize 65536 not usable on most systems. mke2fs 1.41.12 (17-May-2010) mkfs.ext4: 65536-byte blocks too big for system (max 4096) Proceed anyway? (y,n) y Warning: 65536-byte blocks too big for system (max 4096), forced to continue Filesystem label= OS type: Linux Block size=65536 (log=6) Fragment size=65536 (log=6) Stride=0 blocks, Stripe width=0 blocks 19968 inodes, 19830 blocks 991 blocks (5.00%) reserved for the super user First data block=0 1 block group 65528 blocks per group, 65528 fragments per group 19968 inodes per group Writing inode tables: done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. root@ubuntu:~# tune2fs -l /dev/sda3 tune2fs 1.41.12 (17-May-2010) Filesystem volume name: <none> Last mounted on: <not available> Filesystem UUID: 4cf3f507-e7b4-463c-be11-5b408097099b Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 19968 Block count: 19830 Reserved block count: 991 Free blocks: 18720 Free inodes: 19957 First block: 0 Block size: 65536 Fragment size: 65536 Blocks per group: 65528 Fragments per group: 65528 Inodes per group: 19968 Inode blocks per group: 78 Flex block group size: 16 Filesystem created: Sat Feb 5 14:39:55 2011 Last mount time: n/a Last write time: Sat Feb 5 14:40:02 2011 Mount count: 0 Maximum mount count: 37 Last checked: Sat Feb 5 14:39:55 2011 Check interval: 15552000 (6 months) Next check after: Thu Aug 4 14:39:55 2011 Lifetime writes: 70 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: afb5b570-9d47-4786-bad2-4aacb3b73516 Journal backup: inode blocks root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/ mount: wrong fs type, bad option, bad superblock on /dev/sda3, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so
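
    For context on why the mount fails even though mkfs succeeds: on Linux a filesystem's block size cannot exceed the kernel page size, which is 4 KiB on i386/amd64, exactly as the mkfs.ext4 warning says. A quick sketch of how to confirm that on the netbook:

        # page size the kernel can map a block into (prints 4096 on i386/amd64)
        getconf PAGE_SIZE

        # so 4096 is the largest ext4 block size this machine can actually mount
        mkfs.ext4 -b 4096 /dev/sda3

    Larger block sizes are only mountable on architectures configured with larger pages (some PowerPC and Itanium setups, for example), so for benchmarking on this hardware the comparison has to stop at 4 KiB.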

    Read the article

  • debian packages version convention

    - by JackWu
    I'm using debian/Ubuntu, and get confused about versions of packages. When using dpkg -l command, I get: ii vim 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor ii vim-common 2:7.3.429-2ubuntu2.1 Vi IMproved - Common files ii vim-runtime 2:7.3.429-2ubuntu2.1 Vi IMproved - Runtime files ii vim-tiny 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor - compact version ii virt-what 1.11-1 detect if we are running in a virtual machine ii w3m 0.5.3-5ubuntu1 WWW browsable pager with excellent tables/frames support ii watershed 6 reduce superfluous executions of idempotent command ii wget 1.13.4-2ubuntu1 retrieves files from the web ii whiptail 0.52.11-2ubuntu10 Displays user-friendly dialog boxes from shell scripts ii whoopsie 0.1.33 Ubuntu crash database submission daemon ii wimlib9 1.5.0-1~webupd8~precise Library to extract, create, modify, and mount WIM files ii wimtools 1.5.0-1~webupd8~precise Tools to extract, create, modify, and mount WIM files ii wireless-tools 30~pre9-5ubuntu2 Tools for manipulating Linux Wireless Extensions ii wpasupplicant 0.7.3-6ubuntu2.1 client support for WPA and WPA2 (IEEE 802.11i) ii x11-common 1:7.6+12ubuntu2 X Window System (X.Org) infrastructure ii x11-utils 7.6+4ubuntu0.1 X11 utilities ii xauth 1:1.0.6-1 X authentication utility ii xbitmaps 1.1.1-1 Base X bitmaps ii xclip 0.12-1 command line interface to X selections ii xfonts-encodings 1:1.0.4-1ubuntu1 Encodings for X.Org fonts ii xfonts-utils 1:7.6+1 X Window System font utility programs ii xkb-data 2.5-1ubuntu1.3 X Keyboard Extension (XKB) configuration data ii xml-core 0.13 XML infrastructure and XML catalog file support rc xpdf 3.02-21build1 Portable Document Format (PDF) reader ii xterm 271-1ubuntu2.1 X terminal emulator ii xz-lzma 5.1.1alpha+20110809-3 XZ-format compression utilities - compatibility commands ii xz-utils 5.1.1alpha+20110809-3 XZ-format compression utilities ii zabbix-agent 1:1.8.11-1 network monitoring solution - agent ii zlib1g 1:1.2.3.4.dfsg-3ubuntu4 compression library - runtime ii zlib1g-dev 1:1.2.3.4.dfsg-3ubuntu4 compression library - development ii zsh 4.3.17-1ubuntu1 shell with lots of features The third column is version, but it all messed up in a way I can't understand. I mean, different packages use total different naming specification. Here are the major questions: Why there are ubuntu in them, and there are not? what all the special -~+ mean? alpha and build, dfsg, what are they? Can I just use them casually? vim and other packages have 2:, what does that mean? How version comparison works, since they can be so different? Can anyone please explain this to me? Or where can I find an official document? Thanks in advance.
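
    Not a full answer, but a brief illustration of the format those strings follow: [epoch:]upstream_version[-debian_revision], where an ubuntuN suffix on the revision marks an Ubuntu-modified package and a tilde sorts before everything else (handy for pre-releases and PPA suffixes like ~webupd8~precise). dpkg can evaluate comparisons for you; a sketch:

        # epoch (the part before the colon) is compared first, so 2:7.3 beats 1:9.0
        dpkg --compare-versions "2:7.3.429-2ubuntu2.1" gt "1:9.0-1" && echo newer

        # a tilde sorts lower than nothing, so the PPA build precedes plain 1.5.0-1
        dpkg --compare-versions "1.5.0-1~webupd8~precise" lt "1.5.0-1" && echo older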

    Read the article

  • Intermittent wired network issues in 14.04

    - by Tommy Brunn
    Since yesterday, my wired network connection has been dropping for a couple of seconds every 30 seconds or so. To my knowledge, I had not made any changes to my network. Output of ifconfig -a: ? ~ ifconfig -a eth0 Link encap:Ethernet HWaddr 6c:f0:49:b9:b1:7f inet addr:192.168.0.16 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:feb9:b17f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11597 errors:0 dropped:0 overruns:0 frame:0 TX packets:9783 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:10101682 (10.1 MB) TX bytes:1215142 (1.2 MB) Interrupt:48 Base address:0x8000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:96691 errors:0 dropped:0 overruns:0 frame:0 TX packets:96691 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:13594355 (13.5 MB) TX bytes:13594355 (13.5 MB) lspci |grep Ethernet: 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03) Pinging my router: ? ~ ping 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. 64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.435 ms 64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.571 ms ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable 64 bytes from 192.168.0.1: icmp_seq=8 ttl=64 time=1.03 ms And the output of route: ? ~ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 192.168.0.0 * 255.255.255.0 U 1 0 0 eth0 Some messages from /var/logs/syslog: ? ~ tail -f /var/log/syslog Jun 6 10:37:34 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:34 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:37:37 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:37:37 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:37:39 lolbox dhclient: XMT: Solicit on eth0, interval 8660ms. Jun 6 10:37:39 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:39 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:37:47 lolbox dhclient: XMT: Solicit on eth0, interval 16820ms. Jun 6 10:37:47 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:47 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:04 lolbox dhclient: XMT: Solicit on eth0, interval 34410ms. Jun 6 10:38:04 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:04 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out. Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13045 Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled... Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started... 
Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5] Jun 6 10:38:16 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1' Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete. Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0] Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0] Jun 6 10:37:34 lolbox whoopsie[1133]: online Jun 6 10:38:16 lolbox whoopsie[1133]: offline Jun 6 10:38:16 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Jun 6 10:38:16 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13044 Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0. Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS. Jun 6 10:38:16 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:38:17 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:17 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:38:17 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:38:18 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1' Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... 
Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds) Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13160 Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds) Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13161 Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Jun 6 10:38:18 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:38:18 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:18 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:38:18 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:38:18 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:38:18 lolbox dhclient: All rights reserved. Jun 6 10:38:18 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:38:18 lolbox dhclient: Jun 6 10:38:19 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:38:19 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:38:19 lolbox dhclient: All rights reserved. Jun 6 10:38:19 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:38:19 lolbox dhclient: Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit Jun 6 10:38:19 lolbox dhclient: Bound to *:546 Jun 6 10:38:19 lolbox dhclient: Listening on Socket/eth0 Jun 6 10:38:19 lolbox dhclient: Sending on Socket/eth0 Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv6 state changed nbi -> preinit6 Jun 6 10:38:19 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:38:19 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:38:19 lolbox dhclient: Sending on Socket/fallback Jun 6 10:38:19 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3fc9376d) Jun 6 10:38:19 lolbox dhclient: XMT: Solicit on eth0, interval 1020ms. Jun 6 10:38:19 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:38:19 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:38:20 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1 Jun 6 10:38:20 lolbox dhclient: bound to 192.168.0.16 -- renewal in 41481 seconds. 
Jun 6 10:38:20 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot Jun 6 10:38:20 lolbox NetworkManager[862]: <info> address 192.168.0.16 Jun 6 10:38:20 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0) Jun 6 10:38:20 lolbox NetworkManager[862]: <info> gateway 192.168.0.1 Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11' Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150' Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS. Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4. Jun 6 10:38:20 lolbox dhclient: XMT: Solicit on eth0, interval 2110ms. Jun 6 10:38:20 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:38:20 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0] Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0] Jun 6 10:38:21 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf Jun 6 10:38:21 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53 Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53 Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53 Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated. Jun 6 10:38:21 lolbox whoopsie[1133]: message repeated 2 times: [ offline] Jun 6 10:38:21 lolbox whoopsie[1133]: online Jun 6 10:38:21 lolbox ntpdate[13217]: Can't find host ntp.ubuntu.com: Name or service not known (-2) Jun 6 10:38:21 lolbox ntpdate[13217]: no servers can be used, exiting Jun 6 10:38:22 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf Jun 6 10:38:22 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53 Jun 6 10:38:22 lolbox dhclient: XMT: Solicit on eth0, interval 4080ms. Jun 6 10:38:22 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:22 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:26 lolbox dhclient: XMT: Solicit on eth0, interval 8450ms. Jun 6 10:38:26 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:26 lolbox dhclient: IA_NA status code NoAddrsAvail. 
Jun 6 10:38:35 lolbox dhclient: XMT: Solicit on eth0, interval 16630ms. Jun 6 10:38:35 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:35 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:51 lolbox dhclient: XMT: Solicit on eth0, interval 34860ms. Jun 6 10:38:51 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:51 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:58 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:38:58 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out. Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13161 Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled... Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started... Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5] Jun 6 10:39:04 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1' Jun 6 10:38:22 lolbox whoopsie[1133]: online Jun 6 10:39:04 lolbox whoopsie[1133]: offline Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete. Jun 6 10:39:04 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0] Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0] Jun 6 10:39:04 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13160 Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0. Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS. Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf Jun 6 10:39:04 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:39:05 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:05 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:39:05 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. 
Jun 6 10:39:06 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1' Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0] Jun 6 10:39:07 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0] Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0] Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds) Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13270 Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds) Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13271 Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Jun 6 10:39:07 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:39:07 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:07 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:39:07 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:39:07 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:39:07 lolbox dhclient: All rights reserved. Jun 6 10:39:07 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:39:07 lolbox dhclient: Jun 6 10:39:08 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:39:08 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:39:08 lolbox dhclient: All rights reserved. 
Jun 6 10:39:08 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:39:08 lolbox dhclient: Jun 6 10:39:08 lolbox dhclient: Bound to *:546 Jun 6 10:39:08 lolbox dhclient: Listening on Socket/eth0 Jun 6 10:39:08 lolbox dhclient: Sending on Socket/eth0 Jun 6 10:39:08 lolbox kernel: [ 1446.098590] type=1400 audit(1402043948.002:75): apparmor="DENIED" operation="signal" profile="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=13273 comm="nm-dhcp-client." requested_mask="send" denied_mask="send" signal=term peer="/sbin/dhclient" Jun 6 10:39:08 lolbox kernel: [ 1446.098599] type=1400 audit(1402043948.002:76): apparmor="DENIED" operation="signal" profile="/sbin/dhclient" pid=13273 comm="nm-dhcp-client." requested_mask="receive" denied_mask="receive" signal=term peer="/usr/lib/NetworkManager/nm-dhcp-client.action" Jun 6 10:39:08 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit Jun 6 10:39:08 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:39:08 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:39:08 lolbox dhclient: Sending on Socket/fallback Jun 6 10:39:08 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3e0183b9) Jun 6 10:39:08 lolbox dhclient: XMT: Solicit on eth0, interval 1050ms. Jun 6 10:39:08 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:39:08 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:39:09 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1 Jun 6 10:39:09 lolbox dhclient: bound to 192.168.0.16 -- renewal in 35498 seconds. Jun 6 10:39:09 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot Jun 6 10:39:09 lolbox NetworkManager[862]: <info> address 192.168.0.16 Jun 6 10:39:09 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0) Jun 6 10:39:09 lolbox NetworkManager[862]: <info> gateway 192.168.0.1 Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11' Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150' Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS. Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4. Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:39:10 lolbox dhclient: XMT: Solicit on eth0, interval 2180ms. Jun 6 10:39:10 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:10 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0] Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. 
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0] Jun 6 10:39:10 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS. Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf Jun 6 10:39:10 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53 Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53 Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53 Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated. Jun 6 10:39:10 lolbox whoopsie[1133]: message repeated 2 times: [ offline] Jun 6 10:39:10 lolbox whoopsie[1133]: online Jun 6 10:39:10 lolbox ntpdate[13339]: Can't find host ntp.ubuntu.com: Name or service not known (-2) Jun 6 10:39:10 lolbox ntpdate[13339]: no servers can be used, exiting Jun 6 10:39:11 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf Jun 6 10:39:11 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53 Jun 6 10:39:12 lolbox dhclient: XMT: Solicit on eth0, interval 4350ms. Jun 6 10:39:12 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:12 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:16 lolbox dhclient: XMT: Solicit on eth0, interval 8740ms. Jun 6 10:39:16 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:16 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:17 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:17 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:25 lolbox dhclient: XMT: Solicit on eth0, interval 17610ms. Jun 6 10:39:25 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:25 lolbox dhclient: IA_NA status code NoAddrsAvail.

    Read the article

  • MS Ajax Libraries and Configured Assemblies

    - by smehaffie
    Use Case: You have a brand new IIS server that has .NET 3.5 installed and are migrating sites to it. In the process of migrating sites you come across some sites that get an error about the version of the AJAX libraries referenced in the web.config. In the web.config all the entries reference 1.0.61025.0, but the older version of the AJAX libraries is not installed on the new servers; only the latest version, which comes with .NET 3.5, is installed. So what are the options to fix this issue?

    Solutions

    1) Install the older version of the AJAX libraries: Although this works, IMO it is never a great idea to install an older version of a library after a newer version has been installed. Plus, if all new applications use the latest version, is it worth the effort of installing the older version for a few legacy applications?

    2) Update the web.config files so all references use the latest version (3.5.0.0): This option is very time consuming and error prone. In addition, you will also have to update any pages where there is a register tag for the older libraries. This would require you to redeploy any application that has this issue.

    3) Use the Configured Assembly capabilities of .NET (aka assembly bindings) to make any application that uses the older AJAX libraries use the new AJAX libraries. IMO, this is the easiest, quickest and least invasive way to fix the issue. Below are the steps to implement this fix.

    Solution #3

    Do the following steps on the IIS servers where the issue is occurring. The two assemblies that need assembly bindings created are System.Web.Extensions and System.Web.Extensions.Design.

    1) Go to Start -> All Programs -> Administrative Tools -> Microsoft .NET Framework 2.0 Configuration.
    2) Right click on "Configured Assemblies" to view the list of configured assemblies.
    3) Left click on the right pane to bring up the menu and choose "Add".
    4) Make sure "Choose an assembly from the assembly cache" is checked and click the "Choose Assembly" button.
    5) Choose System.Web.Extensions (it does not matter what version).
    6) Click the "Finish" button.
    7) On the Binding Policy tab, enter Requested Version = 1.0.61025.0 and New Version = 3.5.0.0.
    8) Repeat steps 2-7 for the System.Web.Extensions.Design assembly.

    Note: If "Microsoft .NET Framework 2.0 Configuration" does not exist under Administrative Tools, use MMC to access it:
    1) Start -> Run -> enter MMC.
    2) File -> Add/Remove Snap-in, then click the "Add" button.
    3) Choose ".NET 2.0 Configuration", then click the "Add" button and then the "Close" button.
    4) On the "Add/Remove Snap-in" window click the "OK" button.
    5) Expand the tree on the right and you can start following the directions above for adding the configured assemblies.
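
    For completeness, the redirect that the configuration snap-in creates can also be expressed directly as an assembly binding in a web.config (or machine.config). This is a sketch rather than part of the original article; the publicKeyToken shown is the one the MS AJAX assemblies normally carry, so verify it against your GAC:

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions"
                                  publicKeyToken="31bf3856ad364e35" culture="neutral" />
                <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
              </dependentAssembly>
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions.Design"
                                  publicKeyToken="31bf3856ad364e35" culture="neutral" />
                <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>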

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, using the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object:

        TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs"));
        server.EnsureAuthenticated();
        IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer));

    General: First we create an IBuildDefinition object for the team project and set a name and description for it:

        var buildDefinition = buildServer.CreateBuildDefinition(teamProject);
        buildDefinition.Name = "TestBuild";
        buildDefinition.Description = "description here...";

    Trigger: Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the "Continuous Integration - Build each check-in" trigger option:

        buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

    Workspace: For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build.

        buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
        buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

    Build Defaults: In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name:

        buildDefinition.BuildController = buildServer.GetBuildController(buildController);
        buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

    Process: So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team project contains two build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:

        // Get default template
        var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First();
        buildDefinition.Process = defaultTemplate;

    There are several build process parameters that can be set for the default build process template. Only one of these is required: the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value, which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit.
    The following code shows how to set the BuildSettings information for a build definition:

        // Set process parameters
        var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);

        // Set BuildSettings properties
        BuildSettings settings = new BuildSettings();
        settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
        settings.PlatformConfigurations = new PlatformConfigurationList();
        settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
        process.Add("BuildSettings", settings);
        buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

    The other build process parameters of a build definition can be set using the same approach.

    Retention Policy: This one is easy; we just clear the default settings and set our own:

        buildDefinition.RetentionPolicyList.Clear();
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

    Save It! And we're done; let's save the build definition:

        buildDefinition.Save();

    That's it!

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
The project I'm currently working on is going to make heavy use of IBM WebSphere MQ to send messages from BizTalk to the client's iSeries box.  I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work.  I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue.  Or so I thought... A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development.  Everything was going great, I was able to put messages onto the queue, I was happy, the client was happy.  Life was good.  Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries.  We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be.  As it turns out, it was a good thing we made the change.  As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log.  An example from the event log:

Event Type: Error
Event Source: BizTalk Server 2009
Event Category: BizTalk Server 2009
Event ID: 5754
Date: 6/9/2010
Time: 10:16:41 AM
User: N/A
Computer: WINDOWS2003
Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended.
Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124
MessageId: {76825C7C-611A-4A56-8A6F-35E1124BDB5C}
InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6}

The key piece of information in the event entry is the reasonCode, 6124.  A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED.  According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager.  Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter. Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ.  With tracing enabled, I tried sending a message again, then went and reviewed the trace files.  The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this:

00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn
00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK)
00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent
0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK)
0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED)
0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED)
0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED)
0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED)
0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock
00006260 15:40:20.328769   3500.4           :       --}! ImqQueue::lock (rc=Unknown(1))
00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock
00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1))

It seemed like the MQRC_NOT_CONNECTED error was being caused by a security related issue (MQRC_NOT_AUTHORIZED).  I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under.  I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries.  I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance and reconfigured my orchestration to run on the new host instance.  After bouncing all my host instances, I was now able to send messages to the iSeries. It's still not clear to me why we were able to connect to the Solaris server.  I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.
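For context on where that endpoint information typically ends up, a dynamic send port in a BizTalk orchestration is usually addressed from an expression shape along these lines. This is only a sketch: the port name is a placeholder, the URI pieces mirror the placeholders from the event log above, and the property names assume the standard Microsoft.XLANGs.BaseTypes context properties rather than anything specific to this client's BRE policy:

// Expression shape: point the dynamic send port at the queue resolved by the BRE policy
DynamicMQPort(Microsoft.XLANGs.BaseTypes.Address) = "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>";
DynamicMQPort(Microsoft.XLANGs.BaseTypes.TransportType) = "MQSC";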

    Read the article

  • ASP.NET MVC 3: Razor’s @: and <text> syntax

    - by ScottGu
This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (today) In today’s post I’m going to discuss two useful syntactical features of the new Razor view-engine – the @: and <text> syntax support. Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post.  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a list of products: When run, it generates output like: One of the techniques that Razor uses to implicitly identify when a code block ends is to look for tag/element content to denote the beginning of a content region.  For example, in the code snippet above Razor automatically treated the inner <li></li> block within our foreach loop as an HTML content block because it saw the opening <li> tag sequence and knew that it couldn’t be valid C#.  This particular technique – using tags to identify content blocks within code – is one of the key ingredients that makes Razor so clean and productive with scenarios involving HTML creation. Using @: to explicitly indicate the start of content Not all content container blocks start with a tag element, though, and there are scenarios where the Razor parser can’t implicitly detect a content block. Razor addresses this by enabling you to explicitly indicate the beginning of a line of content by using the @: character sequence within a code block.  The @: sequence indicates that the line of content that follows should be treated as a content block: As a more practical example, the below snippet demonstrates how we could output a “(Out of Stock!)” message next to our product name if the product is out of stock: Because I am not wrapping the (Out of Stock!) message in an HTML tag element, Razor can’t implicitly determine that the content within the @if block is the start of a content block.  We are using the @: character sequence to explicitly indicate that this line within our code block should be treated as content. Using Code Nuggets within @: content blocks In addition to outputting static content, you can also have code nuggets embedded within a content block that is initiated using a @: character sequence.  For example, we have two @: sequences in the code snippet below: Notice how within the second @: sequence we are emitting the number of units left within the content block (e.g. “(Only 3 left!)”). We are doing this by embedding a @p.UnitsInStock code nugget within the line of content. Multiple Lines of Content Razor makes it easy to have multiple lines of content wrapped in an HTML element.
For example, below the inner content of our @if container is wrapped in an HTML <p> element – which will cause Razor to treat it as content: For scenarios where the multiple lines of content are not wrapped by an outer HTML element, you can use multiple @: sequences: Alternatively, Razor also allows you to use a <text> element to explicitly identify content: The <text> tag is an element that is treated specially by Razor. It causes Razor to interpret the inner contents of the <text> block as content, and to not render the containing <text> tag element (meaning only the inner contents of the <text> element will be rendered – the tag itself will not).  This makes it convenient when you want to render multi-line content blocks that are not wrapped by an HTML element.  The <text> element can also optionally be used to denote single-lines of content, if you prefer it to the more concise @: sequence: The above code will render the same output as the @: version we looked at earlier.  Razor will automatically omit the <text> wrapping element from the output and just render the content within it.  Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s smart detection of <tag> elements to identify the beginning of content regions is one of the reasons that the Razor approach works so well with HTML generation scenarios, and it enables you to avoid having to explicitly mark the beginning/ending of content regions in about 95% of if/else and foreach scenarios. Razor’s @: and <text> syntax can then be used for scenarios where you want to avoid using an HTML element within a code container block, and need to more explicitly denote a content region. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
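The code screenshots from the original post are not reproduced in this excerpt, so here is a rough reconstruction of the forms described above: the foreach with an <li> content block, the single-line @: sequence (with and without a code nugget), and the multi-line <text> block. This is a sketch pieced together from the prose, not ScottGu's exact snippets, and the variable names are illustrative:

@foreach (var p in products) {
    <li>@p.Name</li>
}

@if (p.UnitsInStock == 0) {
    @:(Out of Stock!)
}
else {
    @:(Only @p.UnitsInStock left!)
}

@if (showDetails) {
    <text>
        Multiple lines of content here,
        rendered without a wrapping HTML element.
    </text>
}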

    Read the article

  • Last week I was presented with a Microsoft MVP award in Virtual Machines – time to thank all who hel

    - by Liam Westley
MVP in Virtual Machines Last week, on 1st April, I received an e-mail from Microsoft letting me know that I had been presented with a 2010 Microsoft® MVP Award for outstanding contributions in Virtual Machine technical communities during the past year.   It was an honour to be nominated, and is a great reflection on the vibrancy of the UK user group community which made this possible. Virtualisation for developers, not just IT Pros I consider it a special honour as my expertise in virtualisation is as a software developer utilising virtual machines to aid my software development, rather than an IT Pro who manages data centre and network infrastructure.  I’ve been on a minor mission over the past few years to enthuse developers in a topic usually seen as only for network admins, but which can make their life a whole lot easier once understood properly. Continuous learning is fun In 1676, the scientist Isaac Newton, in a letter to Robert Hooke used the phrase (http://www.phrases.org.uk/meanings/268025.html) ‘If I have seen a little further it is by standing on the shoulders of Giants’ I’m a nuclear physicist by education, so I am more than comfortable that any knowledge I have is based on the work of others.  Although far from a science, software development and IT is equally built upon the work of others. It’s one of the reasons I despise software patents. So in that sense this MVP award is a result of all the great minds that have provided virtualisation solutions for me to talk about.  I hope that I have always acknowledged those whose work I have used when blogging or giving presentations, and that I have executed my responsibility to share any knowledge gained as widely as possible. Thanks to all those who helped – a big thanks to the UK user group community I reckon this journey started in 2003 when I started attending a user group called the London .Net Users Group (http://www.dnug.org.uk) started by a nice chap called Ian Cooper. The great thing about Ian was that he always encouraged non-professional speakers to take the stage at the user group, and my first ever presentation was on 30th September 2003; SQL Server CE 2.0 and the .NET Compact Framework. In 2005 Ian Cooper was on the committee for the first DeveloperDeveloperDeveloper! day, the free community conference held at Microsoft’s UK HQ in Thames Valley park in Reading.  He encouraged me to take part and so on 14th May 2005 I presented a talk previously given to the London .Net User Group on Simplifying access to multiple DB providers in .NET.  From that point on I definitely had the bug; presenting at DDD2, DDD3, grokking at DDD4 and SQLBits I and after a break, DDD7, DDD Scotland and DDD8.  What definitely made me keen was the encouragement and infectious enthusiasm of some of the other DDD organisers; Craig Murphy, Barry Dorrans, Phil Winstanley and Colin Mackay. During the first few DDD events I met Dave McMahon and Richard Costall from NxtGenUG who made it easy to start presenting at their user groups.  Along the way I’ve met a load of great user group organisers; Guy Smith-Ferrier of the .Net Developer Network, Jimmy Skowronski of GL.Net and the double act of Ray Booysen and Gavin Osborn behind what was Vista Squad and is now Edge UG. Final thanks to those who suggested virtualisation as a topic ... Final thanks have to go to the people who inspired me to create my Virtualisation for Developers talk.
Toby Henderson (@holytshirt) ensured I took notice of Sun’s VirtualBox, Peter Ibbotson for being a fine sounding board at the Kew Railway over quite a few Adnam’s Broadside and to Guy Smith-Ferrier for allowing his user group to be the guinea pigs for the talk before it was seen at DDD7.  Thanks to all of you I now know much more about virtualisation than I would have thought possible and it continues to be great fun. Conclusion If this was an academy award acceptance speech I would have been cut off after the first few paragraphs, so well done if you made it this far.  I’ll be doing my best to do justice to the MVP award and the UK community.  I’m fortunate in having a new employer who considers presenting at user groups as a good thing, so don’t expect me to stop any time soon. If you’ve never seen me in action, then you can view the original DDD7 Virtualisation for Developers presentation (filmed by the Microsoft Channel 9 team) as part of the full DDD7 video list here, http://www.craigmurphy.com/blog/?p=1591.  Also thanks to Craig Murphy’s fine video work you can also view my latest DDD8 presentation on Commercial Software Development, here, http://vimeo.com/9216563 P.S. If I’ve missed anyone out, do feel free to lambast me in comments, it’s your duty.

    Read the article

  • What I don&rsquo;t like about WIF&rsquo;s Claims-based Authorization

    - by Your DisplayName here!
In my last post I wrote about what I like about WIF’s proposed approach to authorization – I also said that I definitely would build upon that infrastructure for my own systems. But implementing such a system is a little harder than it could be. Here’s why (and that’s purely my perspective): First of all WIF’s authorization comes in two “modes” Per-request authorization. When an ASP.NET/WCF request comes in, the registered authorization manager gets called. For SOAP the SOAP action gets passed in. For HTTP requests (ASP.NET, WCF REST) the URL and verb. Imperative authorization This happens when you explicitly call the claims authorization API from within your code. There you have full control over the values for action and resource. In ASP.NET per-request authorization is optional (depending on whether you have added the ClaimsAuthorizationHttpModule). In WCF you always get the per-request checks as soon as you register the authorization manager in configuration. I personally prefer the imperative authorization because first of all I don’t believe in URL based authorization. Especially in the times of MVC and routing tables, URLs can be easily changed – but then you also have to adjust your authorization logic every time. Also – you typically need more knowledge than a simple “if user x is allowed to invoke operation x”. One problem I have is, both the per-request calls as well as the standard WIF imperative authorization APIs wrap actions and resources in the same claim type. This makes it hard to distinguish between the two authorization modes in your authorization manager. But you typically need that feature to structure your authorization policy evaluation in a clean way. The second problem (which is somehow related to the first one) is the standard API for interacting with the claims authorization manager. The API comes as an attribute (ClaimsPrincipalPermissionAttribute) as well as a class to use programmatically (ClaimsPrincipalPermission). Both only allow you to pass in simple strings (which results in the wrapping with standard claim types mentioned earlier). Both throw a SecurityException when the check fails. The attribute is a code access permission attribute (like PrincipalPermission). That means it will always be invoked regardless of how you call the code. This may be exactly what you want, or not. In a unit testing situation (like an MVC controller) you typically want to test the logic in the function – not the security check. The good news is, the WIF API is flexible enough that you can build your own infrastructure around their core. For my own projects I implemented the following extensions: A way to invoke the registered claims authorization manager with more overloads, e.g. with different claim types or a complete AuthorizationContext. A new CAS attribute (with the same calling semantics as the built-in one) with custom claim types. A MVC authorization attribute with custom claim types. A way to use branching – as opposed to catching a SecurityException. I will post the code for these various extensions here – so stay tuned.
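To make the API shapes concrete, here is a minimal sketch of the two standard entry points described above, assuming the WIF ClaimsPrincipalPermission types (Microsoft.IdentityModel.Claims in WIF 1.0, System.IdentityModel.Services in .NET 4.5); the resource and action strings are purely illustrative:

// Declarative: a code access permission attribute, evaluated on every call path
[ClaimsPrincipalPermission(SecurityAction.Demand, Resource = "Customer", Operation = "Add")]
public void AddCustomer(Customer customer)
{
    // ...
}

// Imperative: both strings get wrapped in the standard claim types,
// and a failed check surfaces as a SecurityException
new ClaimsPrincipalPermission("Customer", "Add").Demand();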

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary. Step One: Open or Create a Model In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model. Step Two: Import your Data Dictionary This is a fancy way of saying, ‘suck objects out of the database into my model’. This will open a wizard to connect, select your schema(s), objects, etc. Once they’re in your model, you’re ready to cook with gas. I’m using HR (Human Resources) for this example. You should end up with something that looks like this. Our favorite HR model Now we’re ready to generate the views! Step Three: Auto-generate the Views Go to Tools – Data Modeler – Table to View Wizard. I don’t want all my tables included, and I want to change the naming standard. Decide if you want to change the default generated view names. By default the views will be created as ‘V_TABLE_NAME.’ If you don’t like the ‘V_’ you can enter your own. You also can reference the object and model name with variables as shown in the screenshot above. I’m going to go with something a little more personal. The views are the little green boxes in the diagram. Can’t find your views? They should be grouped together in your diagram. Don’t forget to use the Navigator to easily find and navigate to those model diagram objects! Step Four: Generate the DDL Ok, let’s use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter. You might have existing views in your model that you don’t want to include, right? Once you click ‘OK’ the DDL will be generated.
-- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
-- at: 2013-11-04 10:26:39 EST
-- site: Oracle Database 11g
-- type: Oracle Database 11g

CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID , COUNTRY_NAME , REGION_ID ) AS
SELECT COUNTRY_ID , COUNTRY_NAME , REGION_ID FROM HR.COUNTRIES ;

CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID , FIRST_NAME , LAST_NAME , EMAIL , PHONE_NUMBER , HIRE_DATE , JOB_ID , SALARY , COMMISSION_PCT , MANAGER_ID , DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID , FIRST_NAME , LAST_NAME , EMAIL , PHONE_NUMBER , HIRE_DATE , JOB_ID , SALARY , COMMISSION_PCT , MANAGER_ID , DEPARTMENT_ID FROM HR.EMPLOYEES ;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID , JOB_TITLE , MIN_SALARY , MAX_SALARY ) AS
SELECT JOB_ID , JOB_TITLE , MIN_SALARY , MAX_SALARY FROM HR.JOBS ;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID , START_DATE , END_DATE , JOB_ID , DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID , START_DATE , END_DATE , JOB_ID , DEPARTMENT_ID FROM HR.JOB_HISTORY ;

CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID , STREET_ADDRESS , POSTAL_CODE , CITY , STATE_PROVINCE , COUNTRY_ID ) AS
SELECT LOCATION_ID , STREET_ADDRESS , POSTAL_CODE , CITY , STATE_PROVINCE , COUNTRY_ID FROM HR.LOCATIONS ;

CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID , REGION_NAME ) AS
SELECT REGION_ID , REGION_NAME FROM HR.REGIONS ;

-- Oracle SQL Developer Data Modeler Summary Report:
--
-- CREATE TABLE                     0
-- CREATE INDEX                     0
-- ALTER TABLE                      0
-- CREATE VIEW                      6
-- CREATE PACKAGE                   0
-- CREATE PACKAGE BODY              0
-- CREATE PROCEDURE                 0
-- CREATE FUNCTION                  0
-- CREATE TRIGGER                   0
-- ALTER TRIGGER                    0
-- CREATE COLLECTION TYPE           0
-- CREATE STRUCTURED TYPE           0
-- CREATE STRUCTURED TYPE BODY      0
-- CREATE CLUSTER                   0
-- CREATE CONTEXT                   0
-- CREATE DATABASE                  0
-- CREATE DIMENSION                 0
-- CREATE DIRECTORY                 0
-- CREATE DISK GROUP                0
-- CREATE ROLE                      0
-- CREATE ROLLBACK SEGMENT          0
-- CREATE SEQUENCE                  0
-- CREATE MATERIALIZED VIEW         0
-- CREATE SYNONYM                   0
-- CREATE TABLESPACE                0
-- CREATE USER                      0
--
-- DROP TABLESPACE                  0
-- DROP DATABASE                    0
--
-- REDACTION POLICY                 0
--
-- ERRORS                           0
-- WARNINGS                         0

You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!

    Read the article

  • 6 Reasons Why You Can’t Move Your Cell Phone To Any Carrier You Want

    - by Chris Hoffman
You can buy a laptop or Wi-Fi tablet and use it on Wi-Fi anywhere in the world, so why are cell phones and devices with mobile data not portable between different cellular networks in the same country? Unlike with Wi-Fi, there are many different competing cellular network standards — both around the world and within countries. Cellular carriers also like locking you to their specific network and making it difficult to move. That’s what contracts are for. Phone Locking Many phones are sold locked to a specific network. When you buy a phone from a cellular carrier, they often lock that phone to their network so you can’t take it to a competitor’s network. That’s why you’ll often need to unlock a phone before you can move it to a different cellular provider or take it to a different country and use it on a local provider instead of roaming. Cellular carriers will generally unlock your phone for you as long as you’re no longer in a contract with them. However, unlocking a cell phone you’ve paid for without your carrier’s permission is currently a crime in the USA. GSM vs. CDMA Some cellular networks use the GSM (Global System for Mobile Communications) standard, while some use CDMA (Code-division multiple access). Worldwide, most cellular networks use GSM. In the USA, both GSM and CDMA are popular. Verizon, Sprint, and other carriers that use their networks use CDMA. AT&T, T-Mobile, and other carriers that use their networks use GSM. These are two competing standards and are not interoperable. This means you can’t simply take a phone from Verizon to T-Mobile, or from AT&T to Sprint. These carriers have incompatible phones. CDMA Restrictions CDMA is more restricted than GSM. GSM phones have SIM cards. Simply open the phone, pop out the SIM card, and pop in a new SIM card to switch carriers. (In reality, it’s more complicated thanks to phone locking and other factors here.) CDMA phones don’t have removable modules like this. All CDMA phones ship locked to a specific network and you’d have to get both your old carrier and your new carrier to cooperate to switch phones between them. In reality, many people just consider CDMA phones eternally locked to a specific carrier. Frequencies Different cellular networks throughout the USA and the rest of the world use different frequencies. These radio frequencies have to be supported by your phone’s hardware or your phone simply can’t work on a network using those frequencies. Many GSM phones support three or four bands of frequencies — 900/1800/1900 MHz, 850/1800/1900 MHz, or 850/900/1800/1900 MHz. These are sometimes called “world phones” because they allow easier roaming. This allows the manufacturer to produce a phone that will support all GSM networks in the world and allows their customers to travel with those phones. If your phone doesn’t support the appropriate frequencies, it won’t work on certain networks. LTE Bands When it comes to newer, faster LTE networks, different frequencies are still a concern. LTE frequencies are generally known as “LTE bands.” To use a smartphone on a certain LTE network, that smartphone will have to support that LTE network’s frequency. Different models of phones are often created to work on different LTE networks around the world. However, phones are generally supporting more and more LTE networks and becoming more and more interoperable over time. SIM Card Sizes The SIM cards used in GSM phones come in different sizes. Newer phones use smaller SIM cards to save space and be more compact.
This isn’t a big obstacle, as the different sizes of SIM cards (full-size SIM, mini-SIM, micro-SIM, and nano-SIM) are actually compatible. The only difference between them is the size of the plastic card surrounding the SIM’s chip. The actual chip is the same size between all the SIM cards. This means you can take an old SIM card and cut the plastic off until it becomes a smaller-size SIM card that fits in a modern phone. Or, you can take a smaller-size SIM card and insert it into a tray so that it becomes a larger-size SIM card that fits in an older phone. Be aware that it’s very possible to damage your SIM card and make it not work properly by cutting it to the wrong dimensions. Your cellular carrier will often be able to cut your SIM card for you or give you a new one if you want to use an old SIM card in a new phone. Hopefully they won’t overcharge you for this service, too. Be sure to check what types of networks, frequencies, and LTE bands your phone supports before trying to move it between networks. You may have to buy a new phone when moving between certain cellular carriers. Image Credit: Morgan on Flickr, 22n on Flickr

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
I am a frequent reader of Brent Ozar PLF; it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.” Topic 1: Working with Bad Managers Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter. Topic 2: Working with Remote Teams Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important. I cannot overemphasize how much. And this one line captures how I feel and even communicates the idea clearly! Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it. Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird! Topic 3: Working with Your Nemesis Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with a co-worker like that. Mystery Author B – “Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy! Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Throttling in OSB

    - by Knut Vatsendvik
Technorati Tags: soa,integration,osb,throttling,overload protection A common problem with integration is the risk of overloading a particular web service. When the capacity of a web service is reached and it continues to accept connections, it will most likely start to deteriorate. Fortunately there are 2 techniques in Oracle Service Bus that you can apply to prevent this from happening. You can either limit the concurrent number of requests for a Business Service (outbound requests) or you can limit the number of threads processing the requests for a Proxy Service (inbound requests). Limiting the Concurrent Number of Requests Limiting the concurrent requests for a Business Service cannot be done at design time, so you have to use the built-in Oracle Service Bus Administration Console (/sbconsole) to do it. Follow these steps to enable it: In Change Center, click Create to start a new Session. Select Project Explorer, and navigate to the Business Service you want to limit. Select the Operational Settings tab of the View a Business Service page. In this tab, under Throttling, select the Enable check box. By enabling throttling you: specify a value for Maximum Concurrency; specify a positive integer value for Throttling Queue to backlog messages that have exceeded the message concurrency limit; and specify the maximum time in milliseconds (Message Expiration) a message can spend in the Throttling Queue. Click Update. Click Activate in Change Center to activate the new settings. If you re-publish the service, it will not overwrite the settings; they are only overwritten if the resource is renamed or moved. Please note that a throttling queue is an in-memory queue. Messages that are placed in this queue are not recoverable when a server fails or when you restart a server. Limiting the Number of Threads A better approach, in my opinion, is to limit the number of threads that can work on requests. Follow these steps to do it: Open the WebLogic Server Console (/console). In Change Center, click Create to start a new Session. In the left pane expand Environment and select Work Managers. In the Global Work Managers page, click New. Click the Work Manager radio button, then click Next. Enter a Name for the new Work Manager, and click Next. In the Available Targets list, select server instances or clusters on which you will deploy applications that reference the Work Manager. Click Finish. The new Work Manager now appears in the Global Work Managers page. Select the new Work Manager. Right next to the Maximum Threads Constraint drop-down box, click New. Click the Maximum Threads Constraint radio button, then click Next. Enter a Name and a thread Count to be the maximum number of threads to allocate for requests. Click Next. In the Available Targets list, select server instances or clusters on which you will deploy applications that reference the Work Manager. Click Finish. Click Save. Click Activate in Change Center to activate your changes. A restart may be necessary. Puh! Almost there. Start a new session. Go to the Service Bus Console (/sbconsole) and find your consuming Proxy Service. Click the Edit button of the Transport Configuration tab. Click Next. Set the Dispatch Policy to the new Work Manager. Click Last. Click Save. Click Activate in Change Center to activate your changes.

    Read the article

  • Backing up my Windows Home Server to the Cloud&hellip;

    - by eddraper
Ok, here’s my scenario: Windows Home Server with a little over 3TB of storage.  This includes many years of our home network’s PC backups, music, videos, etcetera. I’d like to get a backup off-site, and the existing APIs and apps such as CloudBerry Labs WHS Backup service are making it easy.  Now, all it comes down to is the vendor and the cost of the actual storage.  So, I thought I’d take a lazy Saturday morning and do some research on this and get the ball rolling.  What I discovered stunned me…  First off, the pricing for just about everything was loaded with complexity.  I learned that it wasn’t just about storage… it was about network usage, requests, sites, replication, and on and on. I really don’t see this as rocket science.  I have a disk image.  I want to put it in the cloud.  I’m not going to be using it but once daily for incremental backups.  Sounds like a common scenario.  Yes, if “things get real” and my server goes down, I will need to bring down a lot of data and utilize a fair amount of vendor infrastructure.  However, this may never happen.  Offsite storage is an insurance policy.  The complexity of the cost structures, perhaps by design, creates an environment where it’s incredibly hard to model bottom line costs and compare vendor all-up pricing.  As it is a “lazy Saturday morning,” I’m not in the mood for such antics and I decide to shirk the endeavor entirely.  Thus, I decided to simply fire up calc.exe and do a simple arithmetic model based on price per GB.  I shuddered at the results.  Certainly something was wrong… did I misplace a decimal point?  Then I discovered CloudBerry’s own calculator.  Nope, I hadn’t misplaced those decimals after all.  Check it out (pricing based on 3174 GB):

Amazon S3  $398.00 per month  $4761 per year
Azure  $396.75 per month  $4761 per year
Google  $380.88 per month  $4570.56 per year

Conclusion: Rampant crack smoking at vendors.  Seriously.  Out. Of. Their. Minds. Now, to Amazon’s credit, vision, and outright common sense, they had one offering which directly addresses my scenario:

Amazon Glacier  $31.74 per month  $380.88 per year

hmmm… It’s on the table.  Let’s see what it would cost to just buy some drives, an enclosure and cart them over to a friend’s house.

2 x 2TB Drives from NewEgg.com  $199.99
Enclosure  $39.99
Total  $239.98
Carting data back and forth to friend’s within walking distance  pain
Leave drive unplugged at friend’s  $0 for electricity
Possible data loss  No way I can come and go every day.

I think I’ll think on this a bit more…
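For context, the per-gigabyte arithmetic behind those quotes is simple: Azure’s figure is 3174 GB x roughly $0.125 per GB per month = $396.75 (about $4,761 per year), Google’s works out to about $0.12 per GB per month, and Glacier’s $31.74 per month is just 3174 GB x $0.01 per GB per month, an order of magnitude cheaper before any retrieval or network charges are factored in.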

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel right through to server administrators and DBAs who were always measuring the improvements they were making to their queries in the smallest possible unit. The problem I always faced was that more often than not I was stuck in the middle trying to mediate between these different functions and enable each side to understand the other’s point of view. The main areas of contention between these functional groups, in my experience, have been at 2 key points: during the build phase and then when there is a problem post-build. During both of these times it is often easier for someone to pass the buck onto someone else than spend the time to understand the other person’s perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function, but also help when it comes to the issues faced once a site has been pushed live. In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed onto a front-end developer. This is the first point at which heated discussions can arise. One problem I’ve seen several times is that the designer doesn’t fully understand the platform constraints that need to be considered, and as a result has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to be able to meet some amazing people and this happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship, Mixture. Mixture allows the front end developer to quickly prototype a web page with built-in frameworks such as bootstrap. It’s not an IDE however, it just sits there in the background and monitors the project files, so every time you save a file from your favorite IDE, it will compile things like LESS, compact your JavaScript and then automatically refresh your test browser so you can see the changes instantly. I think one of the best parts of this however is a single button that pushes the changed files up to the web so the designer can instantly see how far the developer has got and the problem that he is facing at that time without the need to spend time setting up a remote server. I can see this being a real asset to remote teams where there needs to be a compromise between the designer and the front-end developer, or just to allow the designer to see how the build is progressing and suggest small alterations. Once the design has been built into the front end the designer’s job is generally done and there are no other points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating it into the back end and deploying it to the production server other functions start to be pulled in and other issues arise such as the back-end developer understanding the frameworks that they are using such as the routes that are in place in an MVC application or the number of database calls that the ORM layer is actually making.
There are many tools out there that can actually help with these problems, such as mini profiler, which gives you a quick snapshot of what is going on directly in the browser. For a slightly more in-depth look at what is happening and to gain a deeper understanding of an application you may be working on though, you may want to consider Glimpse. Created by Nik and Anthony, it is an application that sits at the bottom of your browser (installed via NuGet) which can show you information about how your application is pieced together and how the information on screen is being delivered as it happens. With a wealth of community-built plugins, such as ones for nHibernate and linq2SQL (full list of plugins on NuGet), it can be customized directly to your own setup to truly delve into the code to see what is happening, and can help to reduce the number of confusing moments about whether it is your code that is going wrong or whether there is something more sinister happening directly on the server. All the tools that I have mentioned in this post help to do one thing above all, and that is to ease the barrier of understanding between the different functions that are involved in building and maintaining a web application. In my experience it is very easy to say “Well, that’s not my problem”, simply because the two functions involved don’t truly understand the other’s point of view. Software should not only be seen as a way to streamline our own working process or as a debugging tool but also as a communication aid to improve the entire lifecycle of a web project. Glimpse is actually the project that I am the designer on and I would love to get your feedback if you do decide to try it out. If you would like to share your own experiences of working on web projects, please fill in your details at https://www.surveymk.com/s/joinGlimpse or add a comment below and I will get in touch with you.

    Read the article

  • bluez 5.19 PS4 controller

    - by Athanase
I currently have a problem when pairing my computer with a PS4 remote. On my Ubuntu 14.04 I removed everything related to bluez and bluetooth, and I built and installed bluez 5.19. Here are some useful command outputs: jean@system ~ hcitool hcitool - HCI Tool ver 5.19 jean@system ~ hcitool dev Devices: hci0 00:15:83:4C:0C:BB jean@system ~ bluetoothctl [bluetooth]# version Version 5.19 jean@system ~ bluetoothctl [NEW] Controller 00:15:83:4C:0C:BB BlueZ [default] jean@system ~ lsusb Bus 003 Device 012: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) So here is what happens. When I try to hard pair the controller with the computer, by holding the share and ps button for a while, everything works as expected and the pairing is done properly. After a hard pairing if I try the pairing by pressing the ps button only, nothing happens. In order to do it, I first power up the bluetooth dongle: jean@system ~ sudo hciconfig hciX up and then I run the bluetooth daemon bluetoothd: jean@system /usr/libexec/bluetooth ~ ./bluetoothd -d -n bluetoothd[11270]: Bluetooth daemon 5.19 bluetoothd[11270]: src/main.c:parse_config() parsing main.conf bluetoothd[11270]: src/main.c:parse_config() discovto=0 bluetoothd[11270]: src/main.c:parse_config() pairto=0 bluetoothd[11270]: src/main.c:parse_config() auto_to=60 bluetoothd[11270]: src/main.c:parse_config() name=%h-%d bluetoothd[11270]: src/main.c:parse_config() class=0x000100 bluetoothd[11270]: src/main.c:parse_config() Key file does not have key 'DeviceID' bluetoothd[11270]: src/gatt.c:gatt_init() Starting GATT server bluetoothd[11270]: src/adapter.c:adapter_init() sending read version command bluetoothd[11270]: Starting SDP server bluetoothd[11270]: src/sdpd-service.c:register_device_id() Adding device id record for 0002:1d6b:0246:0513 ... 
bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0 bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_sink_server_probe() path /org/bluez/hci0 bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_source_server_probe() path /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:btd_adapter_unblock_address() hci0 00:00:00:00:00:00 bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_new() address A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_C1_0D_2A bluetoothd[11270]: src/device.c:device_set_bonded() bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_new() address A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_88_5E_9A bluetoothd[11270]: src/device.c:device_set_bonded() bluetoothd[11270]: src/adapter.c:load_link_keys() hci0 keys 2 debug_keys 0 bluetoothd[11270]: src/adapter.c:load_ltks() hci0 keys 0 bluetoothd[11270]: src/adapter.c:load_connections() sending get connections command for index 0 bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0 bluetoothd[11270]: src/adapter.c:set_did() hci0 source 2 vendor 1d6b product 246 version 513 bluetoothd[11270]: src/adapter.c:adapter_register() Adapter /org/bluez/hci0 registered bluetoothd[11270]: src/adapter.c:set_dev_class() sending set device class command for index 0 bluetoothd[11270]: src/adapter.c:set_name() sending set local name command for index 0 bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0 bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0 bluetoothd[11270]: src/adapter.c:adapter_start() adapter /org/bluez/hci0 has been enabled bluetoothd[11270]: src/adapter.c:trigger_passive_scanning() bluetoothd[11270]: plugins/hostname.c:property_changed() static hostname: system bluetoothd[11270]: plugins/hostname.c:property_changed() pretty hostname: bluetoothd[11270]: plugins/hostname.c:update_name() name: system bluetoothd[11270]: src/adapter.c:adapter_set_name() name: system bluetoothd[11270]: plugins/hostname.c:property_changed() chassis: desktop bluetoothd[11270]: plugins/hostname.c:update_class() major: 0x01 minor: 0x01 bluetoothd[11270]: src/adapter.c:load_link_keys_complete() link keys loaded for hci0 bluetoothd[11270]: src/adapter.c:load_ltks_complete() LTKs loaded for hci0 bluetoothd[11270]: src/adapter.c:get_connections_complete() Connection count: 0 And then I press the ps button of the PS4 controller bluetoothd[11270]: src/adapter.c:connected_callback() hci0 device A4:15:66:C1:0D:2A connected eir_len 5 bluetoothd[11270]: profiles/input/server.c:connect_event_cb() Incoming connection from A4:15:66:C1:0D:2A on PSM 17 bluetoothd[11270]: profiles/input/device.c:input_device_set_channel() idev (nil) psm 17 bluetoothd[11270]: Refusing input device connect: No such file or directory (2) bluetoothd[11270]: profiles/input/server.c:confirm_event_cb() bluetoothd[11270]: Refusing connection from A4:15:66:C1:0D:2A: unknown device bluetoothd[11270]: src/adapter.c:dev_disconnected() Device 
A4:15:66:C1:0D:2A disconnected, reason 3 bluetoothd[11270]: src/adapter.c:adapter_remove_connection() bluetoothd[11270]: plugins/policy.c:disconnect_cb() reason 3 bluetoothd[11270]: src/adapter.c:bonding_attempt_complete() hci0 bdaddr A4:15:66:C1:0D:2A type 0 status 0xe bluetoothd[11270]: src/device.c:device_bonding_complete() bonding (nil) status 0x0e bluetoothd[11270]: src/device.c:device_bonding_failed() status 14 bluetoothd[11270]: src/adapter.c:resume_discovery() So I don't know what is happening here and a bit of help would be appreciated.

    Read the article

< Previous Page | 126 127 128 129 130 131 132 133 134 135 136 137  | Next Page >