Hany Farid, Professor at UC Berkeley
  • 220 videos
  • 76,398 views
Deepfakes
The technology that can distort and manipulate digital media is developing at breakneck speed, and it is imperative that the technology that can detect such alterations develop just as quickly. The field of photo forensics seeks to restore some trust to photography.
At their foundation, photo forensic techniques rely on understanding and modeling the imaging pipeline: from the interaction of light with the physical 3-D world, through the refraction of light as it passes through the camera lenses and the transformation of light into electrical signals in the camera sensor, to the conversion of electrical signals into a digital image file. This set of lectures focuses on the first part of t...
668 views

Videos

Ballistic Motion
150 views • 9 months ago
3D Modelling
134 views • 9 months ago
Specularities
69 views • 9 months ago
Lighting
89 views • 9 months ago
Human Abilities and Limits
93 views • 9 months ago
Lens Flare
82 views • 9 months ago
Shadows
131 views • 9 months ago
Vanishing Points and Lines
302 views • 9 months ago
Reflections
132 views • 9 months ago
Introduction
1.1K views • 9 months ago
Closing: parting thoughts
126 views • 1 year ago
Learn Computer Vision: These lectures introduce the theoretical and practical aspects of computer vision from the basics of the image formation process in digital cameras, through basic image processing, space/frequency representations, and techniques for image analysis, recognition, and understanding.
Image understanding: unsupervised learning: tSNE: implementation
183 views • 1 year ago
Image understanding: unsupervised learning: t-distributed stochastic neighbor embedding (tSNE)
158 views • 1 year ago
Image understanding: unsupervised learning: principal component analysis (PCA): eigenfaces
120 views • 1 year ago
Image understanding: unsupervised learning: principal component analysis (PCA): computation
86 views • 1 year ago
Image understanding: unsupervised learning: principal component analysis (PCA): implementation
98 views • 1 year ago
Image understanding: unsupervised learning: principal component analysis (PCA): eigenvectors
105 views • 1 year ago
Image understanding: unsupervised learning: principal component analysis (PCA): covariance matrix
150 views • 1 year ago
Image understanding: unsupervised learning: principal component analysis (PCA): canonical basis
107 views • 1 year ago
Image understanding: unsupervised learning: expectation/maximization: EM implementation
55 views • 1 year ago
Image understanding: unsupervised learning: expectation/maximization: M-step
54 views • 1 year ago
Image understanding: unsupervised learning: expectation/maximization: E-step
67 views • 1 year ago
Image understanding: unsupervised learning: expectation/maximization: EM
74 views • 1 year ago
Image understanding: unsupervised learning: clustering: k-means implementation
88 views • 1 year ago
Image understanding: unsupervised learning: clustering: k-means
129 views • 1 year ago
Image understanding: supervised learning: classification: ANN: convolutional
93 views • 1 year ago
Image understanding: supervised learning: classification: ANN: backpropagation
73 views • 1 year ago
Image understanding: supervised learning: classification: ANN: universal approximation theorem
63 views • 1 year ago
Image understanding: supervised learning: classification: artificial neural networks: xor + hidden
60 views • 1 year ago

Comments

  • @ocamlmail · 19 days ago

    Thank you, very clear and interesting explanation.

  • @Jia-Tan · 1 month ago

    This was awesome. Thank you from a Computer Vision student in the UK!

  • @ocamlmail · 1 month ago

    1:54 -- what last section?

    • @hanyfarid5019 · 1 month ago

      See here for the full syllabus: farid.berkeley.edu/learnComputerVision/

  • @ocamlmail · 1 month ago

    Fantastic, thank you!!!

  • @lel3923 · 1 month ago

    Holy shit, why doesn't this have more views?

  • @afrinsultana703 · 1 month ago

    Can anyone give me the correct steps for the Laplacian pyramid?
    1. Take the image fi from stage i.
    2. Filter the image fi with a low-pass filter and thus create the image li.
    3. Downsample the image li and thus create the image fi+1.
    4. Calculate the difference image hi = fi - li.
    5. Cache hi.
    6. Consolidate all images hi.
    7. Repeat the above steps n times.

    • @hanyfarid5019 · 1 month ago

      Here is some Python code for computing a Laplacian pyramid:

      # Laplacian pyramid
      import matplotlib.pyplot as plt
      import numpy as np
      import cv2
      from scipy.signal import sepfir2d

      im = plt.imread('mandrill.png')        # load image
      h = [1/16, 4/16, 6/16, 4/16, 1/16]     # blur filter
      N = 4                                  # number of pyramid levels

      # Gaussian pyramid
      G = []
      G.append(im)                           # first pyramid level
      for k in range(1, N):                  # pyramid levels
          im2 = np.zeros(im.shape)
          for z in range(3):
              im2[:,:,z] = sepfir2d(im[:,:,z], h, h)  # blur each color channel
          im2 = im2[0:-1:2, 0:-1:2, :]       # down-sample
          im = im2
          G.append(im2)

      # Laplacian pyramid
      L = []
      for k in range(0, N-1):                # pyramid levels
          l1 = G[k]
          l2 = G[k+1]
          l2 = cv2.resize(l2, (0,0), fx=2, fy=2)  # up-sample
          D = l1 - l2
          D = D - np.min(D)                  # scale in [0,1]
          D = D / np.max(D)                  # for display purposes
          L.append(D)
      L.append(G[N-1])

      # display pyramid
      fig, ax = plt.subplots(nrows=1, ncols=N, figsize=(15, 7), dpi=72,
                             sharex=True, sharey=True)
      for k in range(N-1, -1, -1):
          ax[k].imshow(L[k])

    • @afrinsultana703 · 25 days ago

      @hanyfarid5019 Thank you so much for the reply. Would you please give me the correct algorithm steps for this one?

  • @adarshkaran6611 · 1 month ago

    Such a wonderful explanation! Thank you!

  • @sucess7841 · 1 month ago

    Can I get a lecture on SIFT?

    • @hanyfarid5019 · 1 month ago

      I don't have a lecture on SIFT, but this is a lecture on the related HOG features: ruclips.net/video/RaaGoB8XnxM/видео.html

  • @jharris4854 · 1 month ago

    I love how easy this is to understand. I guess there are multiple ways to accomplish a CE. In my text, it's explained using XOR with NOR circuits to accomplish the same thing. Or am I misunderstanding?

    • @hanyfarid5019 · 1 month ago

      You are correct. There are several different ways to create a CE circuit. The way I show is perhaps the most straightforward, but definitely not the most efficient.

  • @UnforsakenXII · 1 month ago

    P=NP : ^ )

  • @naturebless · 2 months ago

    Much much better than the channels with millions of subscribers. I love it...❤❤

  • @robertwilson4117 · 2 months ago

    Why didn't I find this channel earlier...

  • @dariomaddaloni8220 · 3 months ago

    Dear Professor Hany Farid, Thank you for sharing this fantastic course on image filtering! I find your explanations to be extremely clear and engaging. I have a quick question regarding linear time-invariant functions at around the 3:29 and 3:57 minute marks. The formulas shown use h[-3-k] and h[-4-k]. I was wondering if this might be a typo, as I would have expected h[-1-k] and h[0-k] to be used instead, or if I am missing something.

  • @hanyfarid5019 · 3 months ago

    A viewer noticed that there is a bug at the 03:43 mark (nice catch) but I accidentally deleted their comment (sorry viewer). The code at this mark should read: M1z@M1z.T (not M1z@M2z.T)

  • @HelloWorlds__JTS · 3 months ago

    Obvious typo on line 9 at 3:43: should be M1z@M1z.T + ...

    • @hanyfarid5019 · 3 months ago

      Good catch. The code at this mark should read: M1z@M1z.T (not M1z@M2z.T)

  • @tomlee3454 · 3 months ago

    Thanks Professor for your knowledge

  • @MatinRafiei · 3 months ago

    Very clear explanation, thank you

  • @SoheilLotfi · 3 months ago

    This is the most comprehensive CV course I have ever seen. I was always looking for something like this, but I always found general data science courses instead. Thank you so much.

  • @elnaghy · 3 months ago

    you are a legend

  • @ghazal_ggfornow · 3 months ago

    Amazing point of view for explaining why the number of features we want for a model won't change the cost function.

  • @gianlucanordio7200 · 3 months ago

    Wonderful lesson! Thank you

  • @shukkkursabzaliev1730 · 3 months ago

    Amazing! Thank you

  • @thefirstspartan1 · 3 months ago

    Thank you for this lecture series; it is very interesting.

  • @bhuvanmangalore4483 · 4 months ago

    Can't be simpler. Best explanation.

  • @mariaperaltaramos379 · 4 months ago

    Thank you

  • @symphonyh4655 · 5 months ago

    Wow the legend that you are. Thank you!!!

  • @mrtom-a-hawk6732 · 5 months ago

    Wow, such a clear and concise video! And you make sure to review material that students should know! Amazing.

  • @gregoriofreidin4683 · 5 months ago

    Awesome explanation!! As on all the videos. Thanks for the uploads! 🙌🙌

  • @user-nx2yy8xt3i · 5 months ago

    Thank you professor, it's really helpful.

  • @VauRDeC · 5 months ago

    Thank you for this real-world problem exercise! I guess any rigid motion may be explained as a "combination" like that?

  • @BigB00Bs · 5 months ago

    Why didn't anyone comment on a really good explanation?

  • @sandrocavallari4640 · 5 months ago

    How do you generalize this model to a dataset that doesn't pass through the origin? Do we need to "center" the data before using it?

    • @hanyfarid5019 · 5 months ago

      Yes, the data can be centered (i.e., zero meaned) by subtracting the mean of each component.
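A minimal NumPy sketch of that centering step (the toy matrix X below is my own example, not from the lecture):

```python
import numpy as np

# hypothetical data matrix: rows are samples, columns are components
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# center (zero-mean) the data by subtracting the mean of each component
Xc = X - X.mean(axis=0)

print(Xc.mean(axis=0))  # each column now has mean 0
```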

    • @sandrocavallari4640 · 5 months ago

      @hanyfarid5019 Thanks, really appreciated. Your videos are really cool!!

  • @arunjoseph9818 · 6 months ago

    Loved it!!

  • @sujithkumara8252 · 6 months ago

    Great work, sir. I have never seen anyone on YouTube cover things in this much depth ❤😊 Your students will be lucky to have you.

  • @MM-qt5dy · 6 months ago

    Thank you professor for your effort; this was really well explained. Looking forward to learning more from this playlist.

  • @gageshmadaan6819 · 6 months ago

    At 6:21, maybe the summation-of-impulses representation is not right if the unit impulse signal is considered to be centered.

    • @hanyfarid5019 · 6 months ago

      In this formulation, we assume the unit impulse falls on an integer sampling lattice.
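For concreteness, here is a small sketch of that assumption: a discrete signal written as a weighted sum of unit impulses shifted along an integer sampling lattice (the signal values are arbitrary, chosen only for illustration):

```python
import numpy as np

def delta(n):
    """Unit impulse on the integer sampling lattice: 1 at n == 0, else 0."""
    return np.where(n == 0, 1.0, 0.0)

n = np.arange(8)                                # integer sampling lattice
f = np.array([3., 1., 4., 1., 5., 9., 2., 6.])  # arbitrary signal

# f[n] = sum_k f[k] * delta[n - k]
recon = sum(f[k] * delta(n - k) for k in range(len(f)))

print(np.array_equal(f, recon))  # True
```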

  • @rma1563 · 6 months ago

    You are great at teaching. ❤

  • @saisureshmacharlavasu3116 · 6 months ago

    VV Good

  • @freakyfrequency2530 · 7 months ago

    Awesome video, thanks!

  • @deeejiii · 7 months ago

    How come this channel has only 500 subs? Very valuable stuff here.

  • @tonywang7933 · 7 months ago

    Hi Prof. Farid, at 3:50, since the "*" notation stands for convolution, shouldn't the filter be [1, -1]? Assuming right is the positive x direction. The following is some Python output:

    a = [0, 1, 0, 0]
    b = scipy.signal.convolve(a, [1, -1], mode='same')
    print(a)
    print(b)

    [0, 1, 0, 0]
    [ 0  1 -1  0]

    Similarly at 13:27, I think the d filter should be reversed.

    • @hanyfarid5019 · 7 months ago

      This depends on the specific implementation of convolution. Some implementations yield the results you see here in which the filter is flipped, while others don't flip the filter. Your code here is a good way to determine how to specify your filter values.
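A short sketch of that distinction, using SciPy's convolve (which flips the filter before sliding it) versus correlate (which does not); the arrays mirror the example in the comment above:

```python
import numpy as np
from scipy import signal

a = np.array([0, 1, 0, 0])
d = np.array([1, -1])

# convolution flips the filter before sliding it across the signal
conv_full = signal.convolve(a, d)    # [ 0  1 -1  0  0]

# correlation slides the filter without flipping it
corr_full = signal.correlate(a, d)   # [ 0 -1  1  0  0]

print(conv_full)
print(corr_full)
```

Whether a library's "convolution" flips the filter is exactly the implementation detail the reply describes, so a quick check like this is worth running before choosing filter values.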

  • @lbridgetiv4 · 7 months ago

    Your videos are great! Thank you for sharing!

  • @tonywang7933 · 7 months ago

    What is the difference between grayscale intensity vs luminance from YCbCr?

    • @tonywang7933 · 7 months ago

      Thank you @hanyfarid5019. I experimented with both; it turned out they have very similar variance, and some are even an exact match, so I kept only grayscale for easier explanation.

  • @alexandermread · 7 months ago

    This was a great video, thank you.

  • @gannaabdelhafiz5265 · 7 months ago

    Thank you!

  • @melihcankilic5918 · 7 months ago

    Thanks from Türkiye.

  • @tonywang7933 · 8 months ago

    Hi Prof. Farid: I was having great trouble understanding what sepfir2d does, since intuitively we are talking about convolving, but here we are using a different function (sepfir2d). Since the SciPy API doesn't provide much detail, I had to try and guess for a few hours to really understand the linkage between the two concepts. I feel it would be better to show the code using signal.convolve2d first, then show the equivalent but more computationally efficient form; that would really help future students. The following Python code is not a bulletproof test, but it is enough to show my understanding is correct:

    import numpy as np
    from scipy import signal
    import random

    PRE_FILTER = np.array(
        [0.030320, 0.249724, 0.439911, 0.249724, 0.030320],
        dtype=np.float32,
    )
    FIRST_ORDER_DERIVATIVE_FILTER = np.array(
        [-0.104550, -0.292315, 0.000000, 0.292315, 0.104550],
        dtype=np.float32,
    )
    KERNEL_FIRST_ORDER_DERIVATIVE_FILTER = np.zeros((5, 5), dtype=np.float32)
    KERNEL_FIRST_ORDER_DERIVATIVE_FILTER[2, :] = FIRST_ORDER_DERIVATIVE_FILTER
    KERNEL_PRE_FILTER = np.zeros((5, 5), dtype=np.float32)
    KERNEL_PRE_FILTER[:, 2] = PRE_FILTER

    def test_sepfir2d_convolve2d(test_count: int):
        for _ in range(test_count):
            random_image = np.array(random.sample(range(0, 255), 100)).reshape(10, 10)
            sep_gradient_x = signal.sepfir2d(random_image, FIRST_ORDER_DERIVATIVE_FILTER, PRE_FILTER)
            conv_gradient_x_row = signal.convolve2d(random_image, KERNEL_FIRST_ORDER_DERIVATIVE_FILTER, mode="same", boundary="symm")
            conv_gradient_x = signal.convolve2d(conv_gradient_x_row, KERNEL_PRE_FILTER, mode="same", boundary="symm")
            print(f'sepfir2d and convolve2d are equal: {np.array_equal(sep_gradient_x, conv_gradient_x)}')

    test_sepfir2d_convolve2d(50)

    This returns all true for 50 random test images.

    • @hanyfarid5019 · 8 months ago

      Thanks for sharing this code.

  • @tonywang7933 · 8 months ago

    These videos are so good and clear; how come no one leaves a comment?

  • @jaberfarahani6645 · 8 months ago

    Your videos are fantastic... I'm happy I found your channel ❤